00:00:00.001 Started by upstream project "autotest-per-patch" build number 122820 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.035 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.041 The recommended git tool is: git 00:00:00.041 using credential 00000000-0000-0000-0000-000000000002 00:00:00.045 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.065 Fetching changes from the remote Git repository 00:00:00.070 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.096 Using shallow fetch with depth 1 00:00:00.096 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.096 > git --version # timeout=10 00:00:00.137 > git --version # 'git version 2.39.2' 00:00:00.137 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.138 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.138 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:09.341 ERROR: Error fetching remote repo 'origin' 00:00:09.341 hudson.plugins.git.GitException: Failed to fetch from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:09.341 at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:999) 00:00:09.341 at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1241) 00:00:09.341 at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1305) 00:00:09.341 at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:129) 00:00:09.341 at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:165) 00:00:09.341 at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:71) 00:00:09.341 at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:311) 00:00:09.341 at hudson.model.ResourceController.execute(ResourceController.java:101) 00:00:09.341 at hudson.model.Executor.run(Executor.java:442) 00:00:09.341 Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master" returned status code 128: 00:00:09.341 stdout: 00:00:09.341 stderr: fatal: unable to access 'https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool/': server certificate verification failed. CAfile: none CRLfile: none 00:00:09.341 00:00:09.341 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2842) 00:00:09.341 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:2185) 00:00:09.341 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:635) 00:00:09.341 at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:997) 00:00:09.341 ... 
8 more 00:00:09.341 ERROR: Error fetching remote repo 'origin' 00:00:09.341 Retrying after 10 seconds 00:00:19.342 The recommended git tool is: git 00:00:19.342 using credential 00000000-0000-0000-0000-000000000002 00:00:19.343 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:19.369 Fetching changes from the remote Git repository 00:00:19.371 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:19.406 Using shallow fetch with depth 1 00:00:19.406 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:19.406 > git --version # timeout=10 00:00:19.436 > git --version # 'git version 2.39.2' 00:00:19.436 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:19.437 Setting http proxy: proxy-dmz.intel.com:911 00:00:19.437 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:02:16.144 ERROR: Error fetching remote repo 'origin' 00:02:16.144 hudson.plugins.git.GitException: Failed to fetch from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:02:16.144 at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:999) 00:02:16.144 at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1241) 00:02:16.144 at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1305) 00:02:16.144 at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:129) 00:02:16.144 at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:165) 00:02:16.144 at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:71) 00:02:16.144 at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:311) 00:02:16.144 at hudson.model.ResourceController.execute(ResourceController.java:101) 00:02:16.144 at hudson.model.Executor.run(Executor.java:442) 00:02:16.144 Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master" returned status code 128: 00:02:16.144 stdout: 00:02:16.144 stderr: fatal: unable to access 'https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool/': CONNECT tunnel failed, response 500 00:02:16.144 00:02:16.144 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2842) 00:02:16.144 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:2185) 00:02:16.144 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:635) 00:02:16.144 at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:997) 00:02:16.144 ... 
8 more 00:02:16.144 ERROR: Error fetching remote repo 'origin' 00:02:16.144 Retrying after 10 seconds 00:02:26.145 The recommended git tool is: git 00:02:26.145 using credential 00000000-0000-0000-0000-000000000002 00:02:26.147 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:02:26.163 Fetching changes from the remote Git repository 00:02:26.165 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:02:26.180 Using shallow fetch with depth 1 00:02:26.180 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:02:26.180 > git --version # timeout=10 00:02:26.198 > git --version # 'git version 2.39.2' 00:02:26.198 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:02:26.198 Setting http proxy: proxy-dmz.intel.com:911 00:02:26.198 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:02:29.222 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:02:29.234 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:02:29.245 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:02:29.245 > git config core.sparsecheckout # timeout=10 00:02:29.254 > git read-tree -mu HEAD # timeout=10 00:02:29.279 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:02:29.319 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:02:29.319 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:02:29.414 [Pipeline] Start of Pipeline 00:02:29.427 [Pipeline] library 00:02:29.429 Loading library shm_lib@master 00:02:29.429 Library shm_lib@master is cached. Copying from home. 00:02:29.445 [Pipeline] node 00:02:29.450 Running on GP2 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:29.455 [Pipeline] { 00:02:29.468 [Pipeline] catchError 00:02:29.469 [Pipeline] { 00:02:29.480 [Pipeline] wrap 00:02:29.489 [Pipeline] { 00:02:29.494 [Pipeline] stage 00:02:29.495 [Pipeline] { (Prologue) 00:02:29.634 [Pipeline] sh 00:02:29.915 + logger -p user.info -t JENKINS-CI 00:02:29.934 [Pipeline] echo 00:02:29.935 Node: GP2 00:02:29.944 [Pipeline] sh 00:02:30.242 [Pipeline] setCustomBuildProperty 00:02:30.255 [Pipeline] echo 00:02:30.256 Cleanup processes 00:02:30.260 [Pipeline] sh 00:02:30.536 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:30.536 3850169 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:30.548 [Pipeline] sh 00:02:30.828 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:30.828 ++ grep -v 'sudo pgrep' 00:02:30.828 ++ awk '{print $1}' 00:02:30.828 + sudo kill -9 00:02:30.828 + true 00:02:30.841 [Pipeline] cleanWs 00:02:30.850 [WS-CLEANUP] Deleting project workspace... 00:02:30.850 [WS-CLEANUP] Deferred wipeout is used... 
00:02:30.857 [WS-CLEANUP] done 00:02:30.860 [Pipeline] setCustomBuildProperty 00:02:30.872 [Pipeline] sh 00:02:31.154 + sudo git config --global --replace-all safe.directory '*' 00:02:31.212 [Pipeline] nodesByLabel 00:02:31.213 Found a total of 1 nodes with the 'sorcerer' label 00:02:31.222 [Pipeline] httpRequest 00:02:31.227 HttpMethod: GET 00:02:31.227 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:02:31.232 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:02:31.240 Response Code: HTTP/1.1 200 OK 00:02:31.240 Success: Status code 200 is in the accepted range: 200,404 00:02:31.241 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:02:33.464 [Pipeline] sh 00:02:33.743 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:02:33.761 [Pipeline] httpRequest 00:02:33.766 HttpMethod: GET 00:02:33.766 URL: http://10.211.164.101/packages/spdk_c06b0c79b5391d1ba714f7359f725ff01448da34.tar.gz 00:02:33.768 Sending request to url: http://10.211.164.101/packages/spdk_c06b0c79b5391d1ba714f7359f725ff01448da34.tar.gz 00:02:33.771 Response Code: HTTP/1.1 200 OK 00:02:33.772 Success: Status code 200 is in the accepted range: 200,404 00:02:33.772 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c06b0c79b5391d1ba714f7359f725ff01448da34.tar.gz 00:02:54.725 [Pipeline] sh 00:02:55.006 + tar --no-same-owner -xf spdk_c06b0c79b5391d1ba714f7359f725ff01448da34.tar.gz 00:02:58.302 [Pipeline] sh 00:02:58.600 + git -C spdk log --oneline -n5 00:02:58.600 c06b0c79b nvmf: make allow_any_host its own byte 00:02:58.600 297733650 nvmf: don't touch subsystem->flags.allow_any_host directly 00:02:58.600 35948d8fa build: rename SPDK_MOCK_SYSCALLS -> SPDK_MOCK_SYMBOLS 00:02:58.600 69872294e nvme: make spdk_nvme_dhchap_get_digest_length() public 00:02:58.600 67ab645cd nvmf/auth: send AUTH_failure1 message 00:02:58.636 [Pipeline] } 00:02:58.645 [Pipeline] // stage 00:02:58.650 [Pipeline] stage 00:02:58.651 [Pipeline] { (Prepare) 00:02:58.660 [Pipeline] writeFile 00:02:58.671 [Pipeline] sh 00:02:58.949 + logger -p user.info -t JENKINS-CI 00:02:58.972 [Pipeline] sh 00:02:59.255 + logger -p user.info -t JENKINS-CI 00:02:59.267 [Pipeline] sh 00:02:59.549 + cat autorun-spdk.conf 00:02:59.549 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:59.549 SPDK_TEST_NVMF=1 00:02:59.549 SPDK_TEST_NVME_CLI=1 00:02:59.549 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:59.549 SPDK_TEST_NVMF_NICS=e810 00:02:59.549 SPDK_TEST_VFIOUSER=1 00:02:59.549 SPDK_RUN_UBSAN=1 00:02:59.549 NET_TYPE=phy 00:02:59.562 RUN_NIGHTLY=0 00:02:59.567 [Pipeline] readFile 00:02:59.588 [Pipeline] withEnv 00:02:59.590 [Pipeline] { 00:02:59.602 [Pipeline] sh 00:02:59.883 + set -ex 00:02:59.883 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:59.883 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:59.883 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:59.883 ++ SPDK_TEST_NVMF=1 00:02:59.883 ++ SPDK_TEST_NVME_CLI=1 00:02:59.883 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:59.883 ++ SPDK_TEST_NVMF_NICS=e810 00:02:59.883 ++ SPDK_TEST_VFIOUSER=1 00:02:59.883 ++ SPDK_RUN_UBSAN=1 00:02:59.883 ++ NET_TYPE=phy 00:02:59.883 ++ RUN_NIGHTLY=0 00:02:59.883 + case $SPDK_TEST_NVMF_NICS in 00:02:59.883 + DRIVERS=ice 00:02:59.883 + [[ tcp == \r\d\m\a ]] 00:02:59.883 + [[ -n ice ]] 00:02:59.883 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:59.883 
rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:59.883 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:59.883 rmmod: ERROR: Module irdma is not currently loaded 00:02:59.883 rmmod: ERROR: Module i40iw is not currently loaded 00:02:59.883 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:59.883 + true 00:02:59.883 + for D in $DRIVERS 00:02:59.883 + sudo modprobe ice 00:02:59.883 + exit 0 00:02:59.893 [Pipeline] } 00:02:59.910 [Pipeline] // withEnv 00:02:59.915 [Pipeline] } 00:02:59.930 [Pipeline] // stage 00:02:59.939 [Pipeline] catchError 00:02:59.941 [Pipeline] { 00:02:59.978 [Pipeline] timeout 00:02:59.978 Timeout set to expire in 40 min 00:02:59.980 [Pipeline] { 00:02:59.994 [Pipeline] stage 00:02:59.996 [Pipeline] { (Tests) 00:03:00.010 [Pipeline] sh 00:03:00.294 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:00.294 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:00.294 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:00.294 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:03:00.294 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:00.294 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:00.294 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:03:00.294 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:00.294 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:00.294 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:00.294 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:00.294 + source /etc/os-release 00:03:00.294 ++ NAME='Fedora Linux' 00:03:00.294 ++ VERSION='38 (Cloud Edition)' 00:03:00.294 ++ ID=fedora 00:03:00.294 ++ VERSION_ID=38 00:03:00.294 ++ VERSION_CODENAME= 00:03:00.294 ++ PLATFORM_ID=platform:f38 00:03:00.294 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:03:00.294 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:00.294 ++ LOGO=fedora-logo-icon 00:03:00.294 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:03:00.294 ++ HOME_URL=https://fedoraproject.org/ 00:03:00.294 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:03:00.294 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:00.294 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:00.294 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:00.294 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:03:00.294 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:00.294 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:03:00.294 ++ SUPPORT_END=2024-05-14 00:03:00.294 ++ VARIANT='Cloud Edition' 00:03:00.294 ++ VARIANT_ID=cloud 00:03:00.294 + uname -a 00:03:00.294 Linux spdk-gp-02 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:03:00.294 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:01.230 Hugepages 00:03:01.230 node hugesize free / total 00:03:01.230 node0 1048576kB 0 / 0 00:03:01.230 node0 2048kB 0 / 0 00:03:01.230 node1 1048576kB 0 / 0 00:03:01.230 node1 2048kB 0 / 0 00:03:01.230 00:03:01.230 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:01.230 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - - 00:03:01.230 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - - 00:03:01.230 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - - 00:03:01.230 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - - 00:03:01.230 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - - 00:03:01.230 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - - 00:03:01.230 I/OAT 
0000:00:04.6 8086 3c26 0 ioatdma - - 00:03:01.230 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - - 00:03:01.230 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - - 00:03:01.230 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - - 00:03:01.230 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - - 00:03:01.230 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - - 00:03:01.230 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - - 00:03:01.230 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - - 00:03:01.230 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - - 00:03:01.230 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - - 00:03:01.230 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:01.230 + rm -f /tmp/spdk-ld-path 00:03:01.230 + source autorun-spdk.conf 00:03:01.230 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:01.230 ++ SPDK_TEST_NVMF=1 00:03:01.230 ++ SPDK_TEST_NVME_CLI=1 00:03:01.230 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:01.230 ++ SPDK_TEST_NVMF_NICS=e810 00:03:01.230 ++ SPDK_TEST_VFIOUSER=1 00:03:01.230 ++ SPDK_RUN_UBSAN=1 00:03:01.230 ++ NET_TYPE=phy 00:03:01.230 ++ RUN_NIGHTLY=0 00:03:01.230 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:01.230 + [[ -n '' ]] 00:03:01.230 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:01.230 + for M in /var/spdk/build-*-manifest.txt 00:03:01.230 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:01.230 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:01.230 + for M in /var/spdk/build-*-manifest.txt 00:03:01.230 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:01.230 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:01.230 ++ uname 00:03:01.230 + [[ Linux == \L\i\n\u\x ]] 00:03:01.230 + sudo dmesg -T 00:03:01.230 + sudo dmesg --clear 00:03:01.230 + dmesg_pid=3850728 00:03:01.230 + [[ Fedora Linux == FreeBSD ]] 00:03:01.230 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:01.230 + sudo dmesg -Tw 00:03:01.230 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:01.230 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:01.230 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:03:01.230 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:03:01.230 + [[ -x /usr/src/fio-static/fio ]] 00:03:01.230 + export FIO_BIN=/usr/src/fio-static/fio 00:03:01.230 + FIO_BIN=/usr/src/fio-static/fio 00:03:01.230 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:01.230 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:03:01.230 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:01.230 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:01.230 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:01.231 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:01.231 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:01.231 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:01.231 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:01.231 Test configuration: 00:03:01.231 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:01.231 SPDK_TEST_NVMF=1 00:03:01.231 SPDK_TEST_NVME_CLI=1 00:03:01.231 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:01.231 SPDK_TEST_NVMF_NICS=e810 00:03:01.231 SPDK_TEST_VFIOUSER=1 00:03:01.231 SPDK_RUN_UBSAN=1 00:03:01.231 NET_TYPE=phy 00:03:01.231 RUN_NIGHTLY=0 00:39:48 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:01.231 00:39:48 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:01.231 00:39:48 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:01.231 00:39:48 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:01.231 00:39:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.231 00:39:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.231 00:39:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.231 00:39:48 -- paths/export.sh@5 -- $ export PATH 00:03:01.231 00:39:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.231 00:39:48 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:01.231 00:39:48 -- common/autobuild_common.sh@437 -- $ date +%s 00:03:01.231 00:39:48 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715726388.XXXXXX 00:03:01.231 00:39:48 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715726388.QiQFSp 00:03:01.231 00:39:48 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:03:01.231 00:39:48 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:03:01.231 00:39:48 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:03:01.231 00:39:48 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:01.231 00:39:48 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:01.231 00:39:48 -- common/autobuild_common.sh@453 -- $ get_config_params 00:03:01.231 00:39:48 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:03:01.231 00:39:48 -- common/autotest_common.sh@10 -- $ set +x 00:03:01.491 00:39:48 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:03:01.491 00:39:48 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:03:01.491 00:39:48 -- pm/common@17 -- $ local monitor 00:03:01.491 00:39:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.491 00:39:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.491 00:39:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.491 00:39:48 -- pm/common@21 -- $ date +%s 00:03:01.491 00:39:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.491 00:39:48 -- pm/common@21 -- $ date +%s 00:03:01.491 00:39:48 -- pm/common@25 -- $ sleep 1 00:03:01.491 00:39:48 -- pm/common@21 -- $ date +%s 00:03:01.491 00:39:48 -- pm/common@21 -- $ date +%s 00:03:01.491 00:39:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715726388 00:03:01.491 00:39:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715726388 00:03:01.491 00:39:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715726388 00:03:01.491 00:39:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715726388 00:03:01.491 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715726388_collect-vmstat.pm.log 00:03:01.491 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715726388_collect-cpu-load.pm.log 00:03:01.491 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715726388_collect-cpu-temp.pm.log 00:03:01.491 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715726388_collect-bmc-pm.bmc.pm.log 00:03:02.430 00:39:49 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:03:02.430 00:39:49 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:02.430 00:39:49 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:02.430 00:39:49 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:02.430 00:39:49 -- spdk/autobuild.sh@16 -- $ date -u 00:03:02.430 Tue May 14 10:39:49 PM UTC 2024 00:03:02.430 00:39:49 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:02.430 v24.05-pre-624-gc06b0c79b 00:03:02.430 00:39:49 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:02.430 00:39:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:02.430 00:39:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:02.430 00:39:49 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:02.430 00:39:49 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:02.430 00:39:49 -- common/autotest_common.sh@10 -- $ set +x 00:03:02.430 ************************************ 00:03:02.430 START TEST ubsan 00:03:02.430 ************************************ 00:03:02.430 00:39:49 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:03:02.430 using ubsan 00:03:02.430 00:03:02.430 real 0m0.000s 00:03:02.430 user 0m0.000s 00:03:02.430 sys 0m0.000s 00:03:02.430 00:39:49 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:02.430 00:39:49 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:02.430 ************************************ 00:03:02.430 END TEST ubsan 00:03:02.430 ************************************ 00:03:02.430 00:39:49 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:02.430 00:39:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:02.430 00:39:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:02.430 00:39:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:02.430 00:39:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:02.430 00:39:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:02.430 00:39:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:02.430 00:39:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:02.430 00:39:49 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:03:02.430 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:02.430 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:02.999 Using 'verbs' RDMA provider 00:03:13.548 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:25.754 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:25.754 Creating mk/config.mk...done. 00:03:25.754 Creating mk/cc.flags.mk...done. 00:03:25.754 Type 'make' to build. 00:03:25.754 00:40:11 -- spdk/autobuild.sh@69 -- $ run_test make make -j32 00:03:25.754 00:40:11 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:25.754 00:40:11 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:25.754 00:40:11 -- common/autotest_common.sh@10 -- $ set +x 00:03:25.754 ************************************ 00:03:25.754 START TEST make 00:03:25.754 ************************************ 00:03:25.754 00:40:11 make -- common/autotest_common.sh@1121 -- $ make -j32 00:03:25.754 make[1]: Nothing to be done for 'all'. 
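The configure line above captures the whole build setup for this job. For reference, a minimal sketch of reproducing it outside the CI, assuming a fresh SPDK checkout (the repository URL, submodule step, and -j width are assumptions; the configure flags are copied verbatim from the autobuild output above):

    # Hedged repro sketch -- not the CI driver itself.
    git clone https://github.com/spdk/spdk.git && cd spdk
    git submodule update --init              # dpdk, libvfio-user, etc.
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j"$(nproc)"                        # this job runs: run_test make make -j32

Note that --with-fio expects fio sources at that path, as staged on the CI host.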
00:03:26.018 The Meson build system
00:03:26.018 Version: 1.3.1
00:03:26.018 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:26.018 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:26.018 Build type: native build
00:03:26.018 Project name: libvfio-user
00:03:26.018 Project version: 0.0.1
00:03:26.018 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:26.018 C linker for the host machine: cc ld.bfd 2.39-16
00:03:26.018 Host machine cpu family: x86_64
00:03:26.018 Host machine cpu: x86_64
00:03:26.018 Run-time dependency threads found: YES
00:03:26.018 Library dl found: YES
00:03:26.018 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:26.018 Run-time dependency json-c found: YES 0.17
00:03:26.018 Run-time dependency cmocka found: YES 1.1.7
00:03:26.018 Program pytest-3 found: NO
00:03:26.018 Program flake8 found: NO
00:03:26.019 Program misspell-fixer found: NO
00:03:26.019 Program restructuredtext-lint found: NO
00:03:26.019 Program valgrind found: YES (/usr/bin/valgrind)
00:03:26.019 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:26.019 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:26.019 Compiler for C supports arguments -Wwrite-strings: YES
00:03:26.019 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:26.019 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:26.019 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:26.019 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:26.019 Build targets in project: 8
00:03:26.019 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:26.019 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:26.019
00:03:26.019 libvfio-user 0.0.1
00:03:26.019
00:03:26.019 User defined options
00:03:26.019 buildtype : debug
00:03:26.019 default_library: shared
00:03:26.019 libdir : /usr/local/lib
00:03:26.019
00:03:26.019 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:26.968 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:27.227 [1/37] Compiling C object samples/null.p/null.c.o
00:03:27.227 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:27.227 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:27.227 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:27.227 [5/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:27.227 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:27.227 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:27.227 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:27.227 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:27.227 [10/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:27.227 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:27.227 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:27.227 [13/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:27.227 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:27.227 [15/37] Compiling C object samples/server.p/server.c.o
00:03:27.227 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:27.227 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:27.227 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:27.227 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:27.227 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:27.227 [21/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:27.227 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:27.227 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:27.227 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:27.227 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:27.497 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:27.497 [27/37] Linking target lib/libvfio-user.so.0.0.1
00:03:27.497 [28/37] Compiling C object samples/client.p/client.c.o
00:03:27.497 [29/37] Linking target samples/client
00:03:27.497 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:27.497 [31/37] Linking target test/unit_tests
00:03:27.757 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:27.757 [33/37] Linking target samples/gpio-pci-idio-16
00:03:27.757 [34/37] Linking target samples/null
00:03:27.757 [35/37] Linking target samples/server
00:03:27.757 [36/37] Linking target samples/shadow_ioeventfd_server
00:03:27.757 [37/37] Linking target samples/lspci
00:03:27.757 INFO: autodetecting backend as ninja
00:03:27.757 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
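The libvfio-user sub-build above is a stock meson/ninja flow driven from SPDK's Makefile. A sketch of the manual equivalent, using the directories and "User defined options" meson reports above (the staged DESTDIR install is the next line of the log):

    # Manual equivalent of the sub-build above (a sketch; SPDK's Makefile
    # is what actually issues these commands).
    SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    meson setup "$BUILD" "$SRC" --buildtype debug --default-library shared \
        --libdir /usr/local/lib
    ninja -C "$BUILD"                        # the 37 compile/link steps above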
00:03:27.757 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:28.703 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:28.703 ninja: no work to do. 00:03:35.278 The Meson build system 00:03:35.278 Version: 1.3.1 00:03:35.278 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:03:35.278 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:03:35.278 Build type: native build 00:03:35.278 Program cat found: YES (/usr/bin/cat) 00:03:35.278 Project name: DPDK 00:03:35.278 Project version: 23.11.0 00:03:35.278 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:35.278 C linker for the host machine: cc ld.bfd 2.39-16 00:03:35.278 Host machine cpu family: x86_64 00:03:35.278 Host machine cpu: x86_64 00:03:35.278 Message: ## Building in Developer Mode ## 00:03:35.278 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:35.278 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:35.278 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:35.278 Program python3 found: YES (/usr/bin/python3) 00:03:35.278 Program cat found: YES (/usr/bin/cat) 00:03:35.278 Compiler for C supports arguments -march=native: YES 00:03:35.278 Checking for size of "void *" : 8 00:03:35.278 Checking for size of "void *" : 8 (cached) 00:03:35.278 Library m found: YES 00:03:35.278 Library numa found: YES 00:03:35.278 Has header "numaif.h" : YES 00:03:35.278 Library fdt found: NO 00:03:35.278 Library execinfo found: NO 00:03:35.278 Has header "execinfo.h" : YES 00:03:35.278 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:35.278 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:35.278 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:35.278 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:35.278 Run-time dependency openssl found: YES 3.0.9 00:03:35.278 Run-time dependency libpcap found: YES 1.10.4 00:03:35.278 Has header "pcap.h" with dependency libpcap: YES 00:03:35.278 Compiler for C supports arguments -Wcast-qual: YES 00:03:35.278 Compiler for C supports arguments -Wdeprecated: YES 00:03:35.278 Compiler for C supports arguments -Wformat: YES 00:03:35.278 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:35.278 Compiler for C supports arguments -Wformat-security: NO 00:03:35.278 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:35.278 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:35.278 Compiler for C supports arguments -Wnested-externs: YES 00:03:35.278 Compiler for C supports arguments -Wold-style-definition: YES 00:03:35.278 Compiler for C supports arguments -Wpointer-arith: YES 00:03:35.279 Compiler for C supports arguments -Wsign-compare: YES 00:03:35.279 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:35.279 Compiler for C supports arguments -Wundef: YES 00:03:35.279 Compiler for C supports arguments -Wwrite-strings: YES 00:03:35.279 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:35.279 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:35.279 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:03:35.279 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:35.279 Program objdump found: YES (/usr/bin/objdump) 00:03:35.279 Compiler for C supports arguments -mavx512f: YES 00:03:35.279 Checking if "AVX512 checking" compiles: YES 00:03:35.279 Fetching value of define "__SSE4_2__" : 1 00:03:35.279 Fetching value of define "__AES__" : 1 00:03:35.279 Fetching value of define "__AVX__" : 1 00:03:35.279 Fetching value of define "__AVX2__" : (undefined) 00:03:35.279 Fetching value of define "__AVX512BW__" : (undefined) 00:03:35.279 Fetching value of define "__AVX512CD__" : (undefined) 00:03:35.279 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:35.279 Fetching value of define "__AVX512F__" : (undefined) 00:03:35.279 Fetching value of define "__AVX512VL__" : (undefined) 00:03:35.279 Fetching value of define "__PCLMUL__" : 1 00:03:35.279 Fetching value of define "__RDRND__" : (undefined) 00:03:35.279 Fetching value of define "__RDSEED__" : (undefined) 00:03:35.279 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:35.279 Fetching value of define "__znver1__" : (undefined) 00:03:35.279 Fetching value of define "__znver2__" : (undefined) 00:03:35.279 Fetching value of define "__znver3__" : (undefined) 00:03:35.279 Fetching value of define "__znver4__" : (undefined) 00:03:35.279 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:35.279 Message: lib/log: Defining dependency "log" 00:03:35.279 Message: lib/kvargs: Defining dependency "kvargs" 00:03:35.279 Message: lib/telemetry: Defining dependency "telemetry" 00:03:35.279 Checking for function "getentropy" : NO 00:03:35.279 Message: lib/eal: Defining dependency "eal" 00:03:35.279 Message: lib/ring: Defining dependency "ring" 00:03:35.279 Message: lib/rcu: Defining dependency "rcu" 00:03:35.279 Message: lib/mempool: Defining dependency "mempool" 00:03:35.279 Message: lib/mbuf: Defining dependency "mbuf" 00:03:35.279 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:35.279 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:35.279 Compiler for C supports arguments -mpclmul: YES 00:03:35.279 Compiler for C supports arguments -maes: YES 00:03:35.279 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:35.279 Compiler for C supports arguments -mavx512bw: YES 00:03:35.279 Compiler for C supports arguments -mavx512dq: YES 00:03:35.279 Compiler for C supports arguments -mavx512vl: YES 00:03:35.279 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:35.279 Compiler for C supports arguments -mavx2: YES 00:03:35.279 Compiler for C supports arguments -mavx: YES 00:03:35.279 Message: lib/net: Defining dependency "net" 00:03:35.279 Message: lib/meter: Defining dependency "meter" 00:03:35.279 Message: lib/ethdev: Defining dependency "ethdev" 00:03:35.279 Message: lib/pci: Defining dependency "pci" 00:03:35.279 Message: lib/cmdline: Defining dependency "cmdline" 00:03:35.279 Message: lib/hash: Defining dependency "hash" 00:03:35.279 Message: lib/timer: Defining dependency "timer" 00:03:35.279 Message: lib/compressdev: Defining dependency "compressdev" 00:03:35.279 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:35.279 Message: lib/dmadev: Defining dependency "dmadev" 00:03:35.279 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:35.279 Message: lib/power: Defining dependency "power" 00:03:35.279 Message: lib/reorder: Defining dependency "reorder" 00:03:35.279 Message: lib/security: Defining dependency 
"security" 00:03:35.279 Has header "linux/userfaultfd.h" : YES 00:03:35.279 Has header "linux/vduse.h" : YES 00:03:35.279 Message: lib/vhost: Defining dependency "vhost" 00:03:35.279 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:35.279 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:35.279 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:35.279 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:35.279 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:35.279 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:35.279 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:35.279 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:35.279 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:35.279 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:35.279 Program doxygen found: YES (/usr/bin/doxygen) 00:03:35.279 Configuring doxy-api-html.conf using configuration 00:03:35.279 Configuring doxy-api-man.conf using configuration 00:03:35.279 Program mandb found: YES (/usr/bin/mandb) 00:03:35.279 Program sphinx-build found: NO 00:03:35.279 Configuring rte_build_config.h using configuration 00:03:35.279 Message: 00:03:35.279 ================= 00:03:35.279 Applications Enabled 00:03:35.279 ================= 00:03:35.279 00:03:35.279 apps: 00:03:35.279 00:03:35.279 00:03:35.279 Message: 00:03:35.279 ================= 00:03:35.279 Libraries Enabled 00:03:35.279 ================= 00:03:35.279 00:03:35.279 libs: 00:03:35.279 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:35.279 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:35.279 cryptodev, dmadev, power, reorder, security, vhost, 00:03:35.279 00:03:35.279 Message: 00:03:35.279 =============== 00:03:35.279 Drivers Enabled 00:03:35.279 =============== 00:03:35.279 00:03:35.279 common: 00:03:35.279 00:03:35.279 bus: 00:03:35.279 pci, vdev, 00:03:35.279 mempool: 00:03:35.279 ring, 00:03:35.279 dma: 00:03:35.279 00:03:35.279 net: 00:03:35.279 00:03:35.279 crypto: 00:03:35.279 00:03:35.279 compress: 00:03:35.279 00:03:35.279 vdpa: 00:03:35.279 00:03:35.279 00:03:35.279 Message: 00:03:35.279 ================= 00:03:35.279 Content Skipped 00:03:35.279 ================= 00:03:35.279 00:03:35.279 apps: 00:03:35.279 dumpcap: explicitly disabled via build config 00:03:35.279 graph: explicitly disabled via build config 00:03:35.279 pdump: explicitly disabled via build config 00:03:35.279 proc-info: explicitly disabled via build config 00:03:35.279 test-acl: explicitly disabled via build config 00:03:35.279 test-bbdev: explicitly disabled via build config 00:03:35.279 test-cmdline: explicitly disabled via build config 00:03:35.279 test-compress-perf: explicitly disabled via build config 00:03:35.279 test-crypto-perf: explicitly disabled via build config 00:03:35.279 test-dma-perf: explicitly disabled via build config 00:03:35.279 test-eventdev: explicitly disabled via build config 00:03:35.279 test-fib: explicitly disabled via build config 00:03:35.279 test-flow-perf: explicitly disabled via build config 00:03:35.279 test-gpudev: explicitly disabled via build config 00:03:35.279 test-mldev: explicitly disabled via build config 00:03:35.279 test-pipeline: explicitly disabled via build config 00:03:35.279 test-pmd: explicitly disabled via build config 00:03:35.279 test-regex: explicitly disabled via 
build config 00:03:35.279 test-sad: explicitly disabled via build config 00:03:35.279 test-security-perf: explicitly disabled via build config 00:03:35.279 00:03:35.279 libs: 00:03:35.279 metrics: explicitly disabled via build config 00:03:35.279 acl: explicitly disabled via build config 00:03:35.279 bbdev: explicitly disabled via build config 00:03:35.279 bitratestats: explicitly disabled via build config 00:03:35.279 bpf: explicitly disabled via build config 00:03:35.279 cfgfile: explicitly disabled via build config 00:03:35.279 distributor: explicitly disabled via build config 00:03:35.279 efd: explicitly disabled via build config 00:03:35.279 eventdev: explicitly disabled via build config 00:03:35.279 dispatcher: explicitly disabled via build config 00:03:35.279 gpudev: explicitly disabled via build config 00:03:35.279 gro: explicitly disabled via build config 00:03:35.279 gso: explicitly disabled via build config 00:03:35.279 ip_frag: explicitly disabled via build config 00:03:35.279 jobstats: explicitly disabled via build config 00:03:35.279 latencystats: explicitly disabled via build config 00:03:35.279 lpm: explicitly disabled via build config 00:03:35.279 member: explicitly disabled via build config 00:03:35.279 pcapng: explicitly disabled via build config 00:03:35.279 rawdev: explicitly disabled via build config 00:03:35.279 regexdev: explicitly disabled via build config 00:03:35.279 mldev: explicitly disabled via build config 00:03:35.279 rib: explicitly disabled via build config 00:03:35.279 sched: explicitly disabled via build config 00:03:35.279 stack: explicitly disabled via build config 00:03:35.279 ipsec: explicitly disabled via build config 00:03:35.279 pdcp: explicitly disabled via build config 00:03:35.279 fib: explicitly disabled via build config 00:03:35.279 port: explicitly disabled via build config 00:03:35.279 pdump: explicitly disabled via build config 00:03:35.279 table: explicitly disabled via build config 00:03:35.279 pipeline: explicitly disabled via build config 00:03:35.279 graph: explicitly disabled via build config 00:03:35.279 node: explicitly disabled via build config 00:03:35.279 00:03:35.279 drivers: 00:03:35.279 common/cpt: not in enabled drivers build config 00:03:35.279 common/dpaax: not in enabled drivers build config 00:03:35.279 common/iavf: not in enabled drivers build config 00:03:35.279 common/idpf: not in enabled drivers build config 00:03:35.279 common/mvep: not in enabled drivers build config 00:03:35.279 common/octeontx: not in enabled drivers build config 00:03:35.279 bus/auxiliary: not in enabled drivers build config 00:03:35.279 bus/cdx: not in enabled drivers build config 00:03:35.279 bus/dpaa: not in enabled drivers build config 00:03:35.279 bus/fslmc: not in enabled drivers build config 00:03:35.279 bus/ifpga: not in enabled drivers build config 00:03:35.279 bus/platform: not in enabled drivers build config 00:03:35.279 bus/vmbus: not in enabled drivers build config 00:03:35.279 common/cnxk: not in enabled drivers build config 00:03:35.279 common/mlx5: not in enabled drivers build config 00:03:35.279 common/nfp: not in enabled drivers build config 00:03:35.279 common/qat: not in enabled drivers build config 00:03:35.279 common/sfc_efx: not in enabled drivers build config 00:03:35.279 mempool/bucket: not in enabled drivers build config 00:03:35.279 mempool/cnxk: not in enabled drivers build config 00:03:35.280 mempool/dpaa: not in enabled drivers build config 00:03:35.280 mempool/dpaa2: not in enabled drivers build config 00:03:35.280 
mempool/octeontx: not in enabled drivers build config 00:03:35.280 mempool/stack: not in enabled drivers build config 00:03:35.280 dma/cnxk: not in enabled drivers build config 00:03:35.280 dma/dpaa: not in enabled drivers build config 00:03:35.280 dma/dpaa2: not in enabled drivers build config 00:03:35.280 dma/hisilicon: not in enabled drivers build config 00:03:35.280 dma/idxd: not in enabled drivers build config 00:03:35.280 dma/ioat: not in enabled drivers build config 00:03:35.280 dma/skeleton: not in enabled drivers build config 00:03:35.280 net/af_packet: not in enabled drivers build config 00:03:35.280 net/af_xdp: not in enabled drivers build config 00:03:35.280 net/ark: not in enabled drivers build config 00:03:35.280 net/atlantic: not in enabled drivers build config 00:03:35.280 net/avp: not in enabled drivers build config 00:03:35.280 net/axgbe: not in enabled drivers build config 00:03:35.280 net/bnx2x: not in enabled drivers build config 00:03:35.280 net/bnxt: not in enabled drivers build config 00:03:35.280 net/bonding: not in enabled drivers build config 00:03:35.280 net/cnxk: not in enabled drivers build config 00:03:35.280 net/cpfl: not in enabled drivers build config 00:03:35.280 net/cxgbe: not in enabled drivers build config 00:03:35.280 net/dpaa: not in enabled drivers build config 00:03:35.280 net/dpaa2: not in enabled drivers build config 00:03:35.280 net/e1000: not in enabled drivers build config 00:03:35.280 net/ena: not in enabled drivers build config 00:03:35.280 net/enetc: not in enabled drivers build config 00:03:35.280 net/enetfec: not in enabled drivers build config 00:03:35.280 net/enic: not in enabled drivers build config 00:03:35.280 net/failsafe: not in enabled drivers build config 00:03:35.280 net/fm10k: not in enabled drivers build config 00:03:35.280 net/gve: not in enabled drivers build config 00:03:35.280 net/hinic: not in enabled drivers build config 00:03:35.280 net/hns3: not in enabled drivers build config 00:03:35.280 net/i40e: not in enabled drivers build config 00:03:35.280 net/iavf: not in enabled drivers build config 00:03:35.280 net/ice: not in enabled drivers build config 00:03:35.280 net/idpf: not in enabled drivers build config 00:03:35.280 net/igc: not in enabled drivers build config 00:03:35.280 net/ionic: not in enabled drivers build config 00:03:35.280 net/ipn3ke: not in enabled drivers build config 00:03:35.280 net/ixgbe: not in enabled drivers build config 00:03:35.280 net/mana: not in enabled drivers build config 00:03:35.280 net/memif: not in enabled drivers build config 00:03:35.280 net/mlx4: not in enabled drivers build config 00:03:35.280 net/mlx5: not in enabled drivers build config 00:03:35.280 net/mvneta: not in enabled drivers build config 00:03:35.280 net/mvpp2: not in enabled drivers build config 00:03:35.280 net/netvsc: not in enabled drivers build config 00:03:35.280 net/nfb: not in enabled drivers build config 00:03:35.280 net/nfp: not in enabled drivers build config 00:03:35.280 net/ngbe: not in enabled drivers build config 00:03:35.280 net/null: not in enabled drivers build config 00:03:35.280 net/octeontx: not in enabled drivers build config 00:03:35.280 net/octeon_ep: not in enabled drivers build config 00:03:35.280 net/pcap: not in enabled drivers build config 00:03:35.280 net/pfe: not in enabled drivers build config 00:03:35.280 net/qede: not in enabled drivers build config 00:03:35.280 net/ring: not in enabled drivers build config 00:03:35.280 net/sfc: not in enabled drivers build config 00:03:35.280 net/softnic: 
not in enabled drivers build config 00:03:35.280 net/tap: not in enabled drivers build config 00:03:35.280 net/thunderx: not in enabled drivers build config 00:03:35.280 net/txgbe: not in enabled drivers build config 00:03:35.280 net/vdev_netvsc: not in enabled drivers build config 00:03:35.280 net/vhost: not in enabled drivers build config 00:03:35.280 net/virtio: not in enabled drivers build config 00:03:35.280 net/vmxnet3: not in enabled drivers build config 00:03:35.280 raw/*: missing internal dependency, "rawdev" 00:03:35.280 crypto/armv8: not in enabled drivers build config 00:03:35.280 crypto/bcmfs: not in enabled drivers build config 00:03:35.280 crypto/caam_jr: not in enabled drivers build config 00:03:35.280 crypto/ccp: not in enabled drivers build config 00:03:35.280 crypto/cnxk: not in enabled drivers build config 00:03:35.280 crypto/dpaa_sec: not in enabled drivers build config 00:03:35.280 crypto/dpaa2_sec: not in enabled drivers build config 00:03:35.280 crypto/ipsec_mb: not in enabled drivers build config 00:03:35.280 crypto/mlx5: not in enabled drivers build config 00:03:35.280 crypto/mvsam: not in enabled drivers build config 00:03:35.280 crypto/nitrox: not in enabled drivers build config 00:03:35.280 crypto/null: not in enabled drivers build config 00:03:35.280 crypto/octeontx: not in enabled drivers build config 00:03:35.280 crypto/openssl: not in enabled drivers build config 00:03:35.280 crypto/scheduler: not in enabled drivers build config 00:03:35.280 crypto/uadk: not in enabled drivers build config 00:03:35.280 crypto/virtio: not in enabled drivers build config 00:03:35.280 compress/isal: not in enabled drivers build config 00:03:35.280 compress/mlx5: not in enabled drivers build config 00:03:35.280 compress/octeontx: not in enabled drivers build config 00:03:35.280 compress/zlib: not in enabled drivers build config 00:03:35.280 regex/*: missing internal dependency, "regexdev" 00:03:35.280 ml/*: missing internal dependency, "mldev" 00:03:35.280 vdpa/ifc: not in enabled drivers build config 00:03:35.280 vdpa/mlx5: not in enabled drivers build config 00:03:35.280 vdpa/nfp: not in enabled drivers build config 00:03:35.280 vdpa/sfc: not in enabled drivers build config 00:03:35.280 event/*: missing internal dependency, "eventdev" 00:03:35.280 baseband/*: missing internal dependency, "bbdev" 00:03:35.280 gpu/*: missing internal dependency, "gpudev" 00:03:35.280 00:03:35.280 00:03:35.280 Build targets in project: 85 00:03:35.280 00:03:35.280 DPDK 23.11.0 00:03:35.280 00:03:35.280 User defined options 00:03:35.280 buildtype : debug 00:03:35.280 default_library : shared 00:03:35.280 libdir : lib 00:03:35.280 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:35.280 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:35.280 c_link_args : 00:03:35.280 cpu_instruction_set: native 00:03:35.280 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:03:35.280 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:03:35.280 enable_docs : false 00:03:35.280 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 
00:03:35.280 enable_kmods : false 00:03:35.280 tests : false 00:03:35.280 00:03:35.280 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:35.854 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:35.854 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:35.854 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:35.854 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:35.854 [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:35.854 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:35.854 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:35.854 [7/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:35.854 [8/265] Linking static target lib/librte_kvargs.a 00:03:35.854 [9/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:35.854 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:35.854 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:35.854 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:35.854 [13/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:35.854 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:35.854 [15/265] Linking static target lib/librte_log.a 00:03:35.854 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:36.804 [17/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.804 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:36.804 [19/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:36.804 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:36.804 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:36.804 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:36.804 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:36.804 [24/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:36.804 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:36.804 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:36.804 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:36.804 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:36.804 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:36.804 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:36.804 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:36.804 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:36.804 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:36.804 [34/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:36.804 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:36.804 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:37.066 [37/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:37.066 [38/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:37.066 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:37.066 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:37.066 [41/265] Linking static target lib/librte_telemetry.a 00:03:37.066 [42/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:37.066 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:37.066 [44/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:37.066 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:37.066 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:37.066 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:37.066 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:37.066 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:37.066 [50/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:37.066 [51/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:37.066 [52/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:37.066 [53/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:37.066 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:37.066 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:37.066 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:37.324 [57/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.324 [58/265] Linking target lib/librte_log.so.24.0 00:03:37.324 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:37.584 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:37.845 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:37.845 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:37.845 [63/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:37.845 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:37.845 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:37.845 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:37.845 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:37.845 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:37.845 [69/265] Linking target lib/librte_kvargs.so.24.0 00:03:37.845 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:38.116 [71/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.116 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:38.116 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:38.116 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:38.116 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:38.116 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:38.116 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 
00:03:38.116 [78/265] Linking target lib/librte_telemetry.so.24.0 00:03:38.116 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:38.116 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:38.116 [81/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:38.116 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:38.380 [83/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:38.380 [84/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:38.380 [85/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:38.380 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:38.380 [87/265] Linking static target lib/librte_ring.a 00:03:38.380 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:38.380 [89/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:38.380 [90/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:38.380 [91/265] Linking static target lib/librte_rcu.a 00:03:38.380 [92/265] Linking static target lib/librte_eal.a 00:03:38.380 [93/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:38.380 [94/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:38.380 [95/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:38.380 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:38.380 [97/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:38.380 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:38.380 [99/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:38.380 [100/265] Linking static target lib/librte_mempool.a 00:03:38.380 [101/265] Linking static target lib/librte_pci.a 00:03:38.641 [102/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:38.641 [103/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:38.641 [104/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:38.641 [105/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:38.641 [106/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:38.906 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:38.906 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:38.906 [109/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:38.906 [110/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:38.906 [111/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:38.906 [112/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:38.906 [113/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:38.906 [114/265] Linking static target lib/librte_meter.a 00:03:38.906 [115/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:38.906 [116/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:38.906 [117/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.906 [118/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.906 [119/265] Compiling C object 
lib/librte_net.a.p/net_rte_ether.c.o 00:03:39.166 [120/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.166 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:39.166 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:39.424 [123/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:39.424 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:39.424 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:39.424 [126/265] Linking static target lib/librte_net.a 00:03:39.424 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:39.424 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:39.424 [129/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.424 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:39.424 [131/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:39.424 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:39.688 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:39.688 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:39.688 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:39.688 [136/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:39.688 [137/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:39.688 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:39.688 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:39.688 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:39.688 [141/265] Linking static target lib/librte_cmdline.a 00:03:39.952 [142/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.952 [143/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:39.952 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:39.952 [145/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:39.952 [146/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:40.212 [147/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:40.212 [148/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.212 [149/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:40.212 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:40.212 [151/265] Linking static target lib/librte_timer.a 00:03:40.212 [152/265] Linking static target lib/librte_mbuf.a 00:03:40.212 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:40.212 [154/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:40.212 [155/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:40.212 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:40.472 [157/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:40.472 [158/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:40.472 [159/265] Compiling C 
object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:40.472 [160/265] Linking static target lib/librte_dmadev.a 00:03:40.472 [161/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:40.734 [162/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:40.734 [163/265] Linking static target lib/librte_hash.a 00:03:40.734 [164/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:40.734 [165/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:40.734 [166/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:40.734 [167/265] Linking static target lib/librte_compressdev.a 00:03:40.734 [168/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:40.734 [169/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:40.734 [170/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:40.997 [171/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:40.997 [172/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.997 [173/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:40.997 [174/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:40.997 [175/265] Linking static target lib/librte_power.a 00:03:41.259 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:41.259 [177/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:41.259 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:41.259 [179/265] Linking static target lib/librte_reorder.a 00:03:41.259 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:41.259 [181/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:41.259 [182/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.259 [183/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:41.259 [184/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.517 [185/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:41.517 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:41.517 [187/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.517 [188/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.517 [189/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.517 [190/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:41.517 [191/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:41.517 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.517 [193/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:41.517 [194/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:41.517 [195/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:41.517 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:41.517 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:41.775 [198/265] Linking static target lib/librte_ethdev.a 
00:03:41.775 [199/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:41.775 [200/265] Linking static target lib/librte_security.a 00:03:41.775 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:41.775 [202/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:41.775 [203/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.775 [204/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:41.775 [205/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:41.776 [206/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:41.776 [207/265] Linking static target drivers/librte_bus_vdev.a 00:03:41.776 [208/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:42.034 [209/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:42.034 [210/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:42.034 [211/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:42.034 [212/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:42.034 [213/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:42.034 [214/265] Linking static target drivers/librte_bus_pci.a 00:03:42.034 [215/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:42.034 [216/265] Linking static target lib/librte_cryptodev.a 00:03:42.034 [217/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.034 [218/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.034 [219/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:42.292 [220/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:42.292 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:42.292 [222/265] Linking static target drivers/librte_mempool_ring.a 00:03:42.549 [223/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.116 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.489 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:45.863 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.121 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.121 [228/265] Linking target lib/librte_eal.so.24.0 00:03:46.380 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:46.380 [230/265] Linking target lib/librte_ring.so.24.0 00:03:46.380 [231/265] Linking target lib/librte_timer.so.24.0 00:03:46.380 [232/265] Linking target lib/librte_pci.so.24.0 00:03:46.380 [233/265] Linking target lib/librte_meter.so.24.0 00:03:46.380 [234/265] Linking target lib/librte_dmadev.so.24.0 00:03:46.380 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:46.380 [236/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:46.380 [237/265] Generating symbol file 
lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:46.380 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:46.380 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:46.637 [240/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:46.637 [241/265] Linking target lib/librte_rcu.so.24.0 00:03:46.637 [242/265] Linking target lib/librte_mempool.so.24.0 00:03:46.637 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:46.637 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:46.637 [245/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:46.637 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:46.637 [247/265] Linking target lib/librte_mbuf.so.24.0 00:03:46.896 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:46.896 [249/265] Linking target lib/librte_reorder.so.24.0 00:03:46.896 [250/265] Linking target lib/librte_compressdev.so.24.0 00:03:46.896 [251/265] Linking target lib/librte_net.so.24.0 00:03:46.896 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:03:47.154 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:47.154 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:47.154 [255/265] Linking target lib/librte_hash.so.24.0 00:03:47.154 [256/265] Linking target lib/librte_cmdline.so.24.0 00:03:47.154 [257/265] Linking target lib/librte_security.so.24.0 00:03:47.154 [258/265] Linking target lib/librte_ethdev.so.24.0 00:03:47.154 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:47.154 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:47.412 [261/265] Linking target lib/librte_power.so.24.0 00:03:51.666 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:51.666 [263/265] Linking static target lib/librte_vhost.a 00:03:52.234 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.234 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:52.234 INFO: autodetecting backend as ninja 00:03:52.234 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 32 00:03:53.608 CC lib/log/log.o 00:03:53.608 CC lib/log/log_flags.o 00:03:53.608 CC lib/log/log_deprecated.o 00:03:53.608 CC lib/ut/ut.o 00:03:53.608 CC lib/ut_mock/mock.o 00:03:53.608 LIB libspdk_ut_mock.a 00:03:53.608 SO libspdk_ut_mock.so.6.0 00:03:53.608 LIB libspdk_ut.a 00:03:53.608 LIB libspdk_log.a 00:03:53.608 SO libspdk_ut.so.2.0 00:03:53.608 SO libspdk_log.so.7.0 00:03:53.608 SYMLINK libspdk_ut_mock.so 00:03:53.608 SYMLINK libspdk_ut.so 00:03:53.608 SYMLINK libspdk_log.so 00:03:53.870 CXX lib/trace_parser/trace.o 00:03:53.870 CC lib/dma/dma.o 00:03:53.870 CC lib/ioat/ioat.o 00:03:53.870 CC lib/util/base64.o 00:03:53.870 CC lib/util/bit_array.o 00:03:53.870 CC lib/util/cpuset.o 00:03:53.870 CC lib/util/crc16.o 00:03:53.870 CC lib/util/crc32.o 00:03:53.870 CC lib/util/crc32c.o 00:03:53.870 CC lib/util/crc32_ieee.o 00:03:53.870 CC lib/util/crc64.o 00:03:53.870 CC lib/util/dif.o 00:03:53.870 CC lib/util/fd.o 00:03:53.870 CC lib/util/file.o 00:03:53.870 CC lib/util/hexlify.o 00:03:53.870 CC lib/util/math.o 00:03:53.870 CC 
lib/util/iov.o 00:03:53.870 CC lib/util/pipe.o 00:03:53.870 CC lib/util/strerror_tls.o 00:03:53.870 CC lib/util/string.o 00:03:53.870 CC lib/util/uuid.o 00:03:53.870 CC lib/util/fd_group.o 00:03:53.870 CC lib/util/xor.o 00:03:53.870 CC lib/util/zipf.o 00:03:53.870 CC lib/vfio_user/host/vfio_user_pci.o 00:03:53.870 CC lib/vfio_user/host/vfio_user.o 00:03:54.129 LIB libspdk_dma.a 00:03:54.129 SO libspdk_dma.so.4.0 00:03:54.129 SYMLINK libspdk_dma.so 00:03:54.129 LIB libspdk_ioat.a 00:03:54.386 SO libspdk_ioat.so.7.0 00:03:54.386 LIB libspdk_vfio_user.a 00:03:54.386 SYMLINK libspdk_ioat.so 00:03:54.386 SO libspdk_vfio_user.so.5.0 00:03:54.386 SYMLINK libspdk_vfio_user.so 00:03:54.644 LIB libspdk_util.a 00:03:54.644 SO libspdk_util.so.9.0 00:03:54.644 SYMLINK libspdk_util.so 00:03:54.903 CC lib/conf/conf.o 00:03:54.903 CC lib/idxd/idxd.o 00:03:54.903 CC lib/idxd/idxd_user.o 00:03:54.903 CC lib/vmd/vmd.o 00:03:54.903 CC lib/vmd/led.o 00:03:54.903 CC lib/json/json_parse.o 00:03:54.903 CC lib/json/json_util.o 00:03:54.903 CC lib/json/json_write.o 00:03:54.903 CC lib/rdma/common.o 00:03:54.903 CC lib/rdma/rdma_verbs.o 00:03:54.903 CC lib/env_dpdk/env.o 00:03:54.903 CC lib/env_dpdk/memory.o 00:03:54.903 CC lib/env_dpdk/pci.o 00:03:54.903 CC lib/env_dpdk/init.o 00:03:54.903 CC lib/env_dpdk/threads.o 00:03:54.903 CC lib/env_dpdk/pci_virtio.o 00:03:54.903 CC lib/env_dpdk/pci_ioat.o 00:03:54.903 CC lib/env_dpdk/pci_vmd.o 00:03:54.903 CC lib/env_dpdk/pci_idxd.o 00:03:54.903 CC lib/env_dpdk/pci_event.o 00:03:54.903 CC lib/env_dpdk/sigbus_handler.o 00:03:54.903 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:54.903 CC lib/env_dpdk/pci_dpdk.o 00:03:54.903 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:55.162 LIB libspdk_trace_parser.a 00:03:55.162 SO libspdk_trace_parser.so.5.0 00:03:55.162 SYMLINK libspdk_trace_parser.so 00:03:55.162 LIB libspdk_conf.a 00:03:55.419 SO libspdk_conf.so.6.0 00:03:55.419 SYMLINK libspdk_conf.so 00:03:55.420 LIB libspdk_json.a 00:03:55.420 SO libspdk_json.so.6.0 00:03:55.420 LIB libspdk_rdma.a 00:03:55.420 SO libspdk_rdma.so.6.0 00:03:55.420 SYMLINK libspdk_json.so 00:03:55.420 SYMLINK libspdk_rdma.so 00:03:55.677 LIB libspdk_idxd.a 00:03:55.677 SO libspdk_idxd.so.12.0 00:03:55.677 CC lib/jsonrpc/jsonrpc_server.o 00:03:55.677 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:55.677 CC lib/jsonrpc/jsonrpc_client.o 00:03:55.677 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:55.677 SYMLINK libspdk_idxd.so 00:03:55.677 LIB libspdk_vmd.a 00:03:55.677 SO libspdk_vmd.so.6.0 00:03:55.677 SYMLINK libspdk_vmd.so 00:03:55.936 LIB libspdk_jsonrpc.a 00:03:55.936 SO libspdk_jsonrpc.so.6.0 00:03:55.936 SYMLINK libspdk_jsonrpc.so 00:03:56.194 CC lib/rpc/rpc.o 00:03:56.452 LIB libspdk_rpc.a 00:03:56.452 SO libspdk_rpc.so.6.0 00:03:56.452 SYMLINK libspdk_rpc.so 00:03:56.710 CC lib/notify/notify.o 00:03:56.710 CC lib/notify/notify_rpc.o 00:03:56.710 CC lib/trace/trace.o 00:03:56.710 CC lib/trace/trace_flags.o 00:03:56.710 CC lib/trace/trace_rpc.o 00:03:56.710 CC lib/keyring/keyring.o 00:03:56.710 CC lib/keyring/keyring_rpc.o 00:03:56.967 LIB libspdk_notify.a 00:03:56.967 SO libspdk_notify.so.6.0 00:03:56.967 SYMLINK libspdk_notify.so 00:03:56.967 LIB libspdk_keyring.a 00:03:56.967 LIB libspdk_trace.a 00:03:56.967 SO libspdk_keyring.so.1.0 00:03:56.967 LIB libspdk_env_dpdk.a 00:03:56.967 SO libspdk_trace.so.10.0 00:03:56.967 SYMLINK libspdk_keyring.so 00:03:56.967 SYMLINK libspdk_trace.so 00:03:56.967 SO libspdk_env_dpdk.so.14.0 00:03:57.225 CC lib/thread/thread.o 00:03:57.225 CC lib/thread/iobuf.o 00:03:57.225 CC 
lib/sock/sock.o 00:03:57.225 CC lib/sock/sock_rpc.o 00:03:57.225 SYMLINK libspdk_env_dpdk.so 00:03:57.790 LIB libspdk_sock.a 00:03:57.790 SO libspdk_sock.so.9.0 00:03:57.790 SYMLINK libspdk_sock.so 00:03:58.048 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:58.049 CC lib/nvme/nvme_ctrlr.o 00:03:58.049 CC lib/nvme/nvme_fabric.o 00:03:58.049 CC lib/nvme/nvme_ns_cmd.o 00:03:58.049 CC lib/nvme/nvme_ns.o 00:03:58.049 CC lib/nvme/nvme_pcie_common.o 00:03:58.049 CC lib/nvme/nvme_pcie.o 00:03:58.049 CC lib/nvme/nvme_qpair.o 00:03:58.049 CC lib/nvme/nvme.o 00:03:58.049 CC lib/nvme/nvme_quirks.o 00:03:58.049 CC lib/nvme/nvme_transport.o 00:03:58.049 CC lib/nvme/nvme_discovery.o 00:03:58.049 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:58.049 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:58.049 CC lib/nvme/nvme_tcp.o 00:03:58.049 CC lib/nvme/nvme_opal.o 00:03:58.049 CC lib/nvme/nvme_poll_group.o 00:03:58.049 CC lib/nvme/nvme_io_msg.o 00:03:58.049 CC lib/nvme/nvme_zns.o 00:03:58.049 CC lib/nvme/nvme_stubs.o 00:03:58.049 CC lib/nvme/nvme_auth.o 00:03:58.049 CC lib/nvme/nvme_cuse.o 00:03:58.049 CC lib/nvme/nvme_vfio_user.o 00:03:58.049 CC lib/nvme/nvme_rdma.o 00:03:59.424 LIB libspdk_thread.a 00:03:59.424 SO libspdk_thread.so.10.0 00:03:59.424 SYMLINK libspdk_thread.so 00:03:59.424 CC lib/virtio/virtio.o 00:03:59.424 CC lib/virtio/virtio_vhost_user.o 00:03:59.424 CC lib/virtio/virtio_vfio_user.o 00:03:59.424 CC lib/init/json_config.o 00:03:59.424 CC lib/virtio/virtio_pci.o 00:03:59.424 CC lib/init/subsystem.o 00:03:59.424 CC lib/init/subsystem_rpc.o 00:03:59.424 CC lib/init/rpc.o 00:03:59.424 CC lib/vfu_tgt/tgt_endpoint.o 00:03:59.424 CC lib/blob/blobstore.o 00:03:59.424 CC lib/vfu_tgt/tgt_rpc.o 00:03:59.424 CC lib/blob/request.o 00:03:59.424 CC lib/blob/zeroes.o 00:03:59.424 CC lib/blob/blob_bs_dev.o 00:03:59.424 CC lib/accel/accel.o 00:03:59.424 CC lib/accel/accel_rpc.o 00:03:59.424 CC lib/accel/accel_sw.o 00:03:59.681 LIB libspdk_init.a 00:03:59.682 SO libspdk_init.so.5.0 00:03:59.682 SYMLINK libspdk_init.so 00:03:59.939 LIB libspdk_vfu_tgt.a 00:03:59.939 SO libspdk_vfu_tgt.so.3.0 00:03:59.939 LIB libspdk_virtio.a 00:03:59.939 SO libspdk_virtio.so.7.0 00:03:59.939 CC lib/event/app.o 00:03:59.939 CC lib/event/reactor.o 00:03:59.939 CC lib/event/log_rpc.o 00:03:59.939 CC lib/event/app_rpc.o 00:03:59.939 CC lib/event/scheduler_static.o 00:03:59.939 SYMLINK libspdk_vfu_tgt.so 00:03:59.939 SYMLINK libspdk_virtio.so 00:04:00.505 LIB libspdk_nvme.a 00:04:00.505 LIB libspdk_event.a 00:04:00.505 SO libspdk_event.so.13.0 00:04:00.505 SO libspdk_nvme.so.13.0 00:04:00.505 SYMLINK libspdk_event.so 00:04:00.763 LIB libspdk_accel.a 00:04:00.763 SO libspdk_accel.so.15.0 00:04:00.763 SYMLINK libspdk_accel.so 00:04:00.763 SYMLINK libspdk_nvme.so 00:04:01.021 CC lib/bdev/bdev.o 00:04:01.021 CC lib/bdev/bdev_rpc.o 00:04:01.021 CC lib/bdev/bdev_zone.o 00:04:01.021 CC lib/bdev/part.o 00:04:01.021 CC lib/bdev/scsi_nvme.o 00:04:02.919 LIB libspdk_blob.a 00:04:02.919 SO libspdk_blob.so.11.0 00:04:02.919 SYMLINK libspdk_blob.so 00:04:03.176 CC lib/blobfs/blobfs.o 00:04:03.176 CC lib/blobfs/tree.o 00:04:03.177 CC lib/lvol/lvol.o 00:04:03.435 LIB libspdk_bdev.a 00:04:03.435 SO libspdk_bdev.so.15.0 00:04:03.435 SYMLINK libspdk_bdev.so 00:04:03.703 CC lib/scsi/dev.o 00:04:03.703 CC lib/nbd/nbd.o 00:04:03.703 CC lib/scsi/lun.o 00:04:03.703 CC lib/nbd/nbd_rpc.o 00:04:03.703 CC lib/scsi/port.o 00:04:03.703 CC lib/scsi/scsi.o 00:04:03.703 CC lib/scsi/scsi_bdev.o 00:04:03.703 CC lib/ublk/ublk.o 00:04:03.703 CC lib/ublk/ublk_rpc.o 00:04:03.703 CC 
lib/scsi/scsi_pr.o 00:04:03.703 CC lib/scsi/scsi_rpc.o 00:04:03.703 CC lib/scsi/task.o 00:04:03.703 CC lib/nvmf/ctrlr.o 00:04:03.703 CC lib/nvmf/ctrlr_discovery.o 00:04:03.703 CC lib/ftl/ftl_core.o 00:04:03.703 CC lib/nvmf/ctrlr_bdev.o 00:04:03.703 CC lib/ftl/ftl_init.o 00:04:03.703 CC lib/nvmf/subsystem.o 00:04:03.703 CC lib/ftl/ftl_layout.o 00:04:03.703 CC lib/nvmf/nvmf.o 00:04:03.703 CC lib/ftl/ftl_debug.o 00:04:03.703 CC lib/ftl/ftl_io.o 00:04:03.703 CC lib/nvmf/nvmf_rpc.o 00:04:03.703 CC lib/ftl/ftl_sb.o 00:04:03.703 CC lib/nvmf/transport.o 00:04:03.703 CC lib/ftl/ftl_l2p.o 00:04:03.703 CC lib/nvmf/tcp.o 00:04:03.703 CC lib/nvmf/stubs.o 00:04:03.703 CC lib/nvmf/vfio_user.o 00:04:03.703 CC lib/ftl/ftl_l2p_flat.o 00:04:03.962 CC lib/ftl/ftl_nv_cache.o 00:04:03.962 CC lib/nvmf/rdma.o 00:04:03.962 CC lib/ftl/ftl_band.o 00:04:03.962 CC lib/nvmf/auth.o 00:04:03.962 CC lib/ftl/ftl_band_ops.o 00:04:04.223 CC lib/ftl/ftl_writer.o 00:04:04.223 CC lib/ftl/ftl_rq.o 00:04:04.223 CC lib/ftl/ftl_reloc.o 00:04:04.223 CC lib/ftl/ftl_l2p_cache.o 00:04:04.223 CC lib/ftl/ftl_p2l.o 00:04:04.223 CC lib/ftl/mngt/ftl_mngt.o 00:04:04.223 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:04.223 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:04.223 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:04.223 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:04.223 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:04.487 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:04.487 LIB libspdk_nbd.a 00:04:04.487 SO libspdk_nbd.so.7.0 00:04:04.487 LIB libspdk_lvol.a 00:04:04.487 LIB libspdk_blobfs.a 00:04:04.487 SO libspdk_lvol.so.10.0 00:04:04.487 SO libspdk_blobfs.so.10.0 00:04:04.487 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:04.487 SYMLINK libspdk_nbd.so 00:04:04.487 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:04.487 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:04.487 LIB libspdk_scsi.a 00:04:04.487 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:04.487 SYMLINK libspdk_lvol.so 00:04:04.487 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:04.487 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:04.748 SO libspdk_scsi.so.9.0 00:04:04.748 SYMLINK libspdk_blobfs.so 00:04:04.748 CC lib/ftl/utils/ftl_conf.o 00:04:04.748 CC lib/ftl/utils/ftl_md.o 00:04:04.748 CC lib/ftl/utils/ftl_mempool.o 00:04:04.748 CC lib/ftl/utils/ftl_bitmap.o 00:04:04.748 CC lib/ftl/utils/ftl_property.o 00:04:04.748 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:04.748 SYMLINK libspdk_scsi.so 00:04:04.748 LIB libspdk_ublk.a 00:04:04.748 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:04.748 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:04.748 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:04.748 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:04.748 SO libspdk_ublk.so.3.0 00:04:05.008 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:05.008 SYMLINK libspdk_ublk.so 00:04:05.008 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:05.008 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:05.008 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:05.008 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:05.008 CC lib/ftl/base/ftl_base_dev.o 00:04:05.008 CC lib/ftl/base/ftl_base_bdev.o 00:04:05.008 CC lib/ftl/ftl_trace.o 00:04:05.008 CC lib/iscsi/conn.o 00:04:05.008 CC lib/iscsi/init_grp.o 00:04:05.008 CC lib/iscsi/iscsi.o 00:04:05.008 CC lib/iscsi/md5.o 00:04:05.008 CC lib/iscsi/param.o 00:04:05.267 CC lib/iscsi/portal_grp.o 00:04:05.267 CC lib/vhost/vhost.o 00:04:05.267 CC lib/vhost/vhost_rpc.o 00:04:05.267 CC lib/vhost/vhost_scsi.o 00:04:05.267 CC lib/iscsi/tgt_node.o 00:04:05.267 CC lib/vhost/vhost_blk.o 00:04:05.267 CC lib/iscsi/iscsi_subsystem.o 00:04:05.267 CC lib/vhost/rte_vhost_user.o 00:04:05.267 CC lib/iscsi/iscsi_rpc.o 
00:04:05.267 CC lib/iscsi/task.o 00:04:05.833 LIB libspdk_ftl.a 00:04:05.833 SO libspdk_ftl.so.9.0 00:04:06.399 SYMLINK libspdk_ftl.so 00:04:06.658 LIB libspdk_vhost.a 00:04:06.658 SO libspdk_vhost.so.8.0 00:04:06.658 LIB libspdk_iscsi.a 00:04:06.658 SO libspdk_iscsi.so.8.0 00:04:06.658 LIB libspdk_nvmf.a 00:04:06.658 SYMLINK libspdk_vhost.so 00:04:06.915 SO libspdk_nvmf.so.18.0 00:04:06.915 SYMLINK libspdk_iscsi.so 00:04:06.915 SYMLINK libspdk_nvmf.so 00:04:07.173 CC module/vfu_device/vfu_virtio.o 00:04:07.173 CC module/vfu_device/vfu_virtio_blk.o 00:04:07.173 CC module/vfu_device/vfu_virtio_scsi.o 00:04:07.173 CC module/vfu_device/vfu_virtio_rpc.o 00:04:07.173 CC module/env_dpdk/env_dpdk_rpc.o 00:04:07.431 CC module/blob/bdev/blob_bdev.o 00:04:07.431 CC module/accel/dsa/accel_dsa.o 00:04:07.431 CC module/accel/error/accel_error.o 00:04:07.431 CC module/accel/dsa/accel_dsa_rpc.o 00:04:07.431 CC module/accel/error/accel_error_rpc.o 00:04:07.431 CC module/accel/ioat/accel_ioat.o 00:04:07.431 CC module/accel/ioat/accel_ioat_rpc.o 00:04:07.431 CC module/scheduler/gscheduler/gscheduler.o 00:04:07.431 CC module/keyring/file/keyring.o 00:04:07.431 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:07.431 CC module/accel/iaa/accel_iaa.o 00:04:07.431 CC module/accel/iaa/accel_iaa_rpc.o 00:04:07.431 CC module/keyring/file/keyring_rpc.o 00:04:07.431 CC module/sock/posix/posix.o 00:04:07.431 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:07.431 LIB libspdk_env_dpdk_rpc.a 00:04:07.431 LIB libspdk_scheduler_dpdk_governor.a 00:04:07.431 SO libspdk_env_dpdk_rpc.so.6.0 00:04:07.431 LIB libspdk_keyring_file.a 00:04:07.690 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:07.690 SO libspdk_keyring_file.so.1.0 00:04:07.690 SYMLINK libspdk_env_dpdk_rpc.so 00:04:07.690 LIB libspdk_accel_error.a 00:04:07.690 LIB libspdk_scheduler_gscheduler.a 00:04:07.690 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:07.690 SO libspdk_accel_error.so.2.0 00:04:07.690 SO libspdk_scheduler_gscheduler.so.4.0 00:04:07.690 SYMLINK libspdk_keyring_file.so 00:04:07.690 LIB libspdk_blob_bdev.a 00:04:07.690 LIB libspdk_accel_dsa.a 00:04:07.690 LIB libspdk_accel_ioat.a 00:04:07.690 SO libspdk_blob_bdev.so.11.0 00:04:07.690 LIB libspdk_accel_iaa.a 00:04:07.690 SYMLINK libspdk_scheduler_gscheduler.so 00:04:07.690 SO libspdk_accel_ioat.so.6.0 00:04:07.690 SYMLINK libspdk_accel_error.so 00:04:07.690 SO libspdk_accel_dsa.so.5.0 00:04:07.690 SO libspdk_accel_iaa.so.3.0 00:04:07.690 LIB libspdk_scheduler_dynamic.a 00:04:07.690 SYMLINK libspdk_blob_bdev.so 00:04:07.690 SYMLINK libspdk_accel_ioat.so 00:04:07.690 SO libspdk_scheduler_dynamic.so.4.0 00:04:07.690 SYMLINK libspdk_accel_dsa.so 00:04:07.690 SYMLINK libspdk_accel_iaa.so 00:04:07.690 SYMLINK libspdk_scheduler_dynamic.so 00:04:07.955 LIB libspdk_vfu_device.a 00:04:07.955 CC module/blobfs/bdev/blobfs_bdev.o 00:04:07.955 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:07.955 CC module/bdev/gpt/gpt.o 00:04:07.955 CC module/bdev/ftl/bdev_ftl.o 00:04:07.955 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:07.955 CC module/bdev/gpt/vbdev_gpt.o 00:04:07.955 CC module/bdev/aio/bdev_aio.o 00:04:07.955 CC module/bdev/delay/vbdev_delay.o 00:04:07.955 CC module/bdev/aio/bdev_aio_rpc.o 00:04:07.955 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:07.955 CC module/bdev/raid/bdev_raid.o 00:04:07.955 CC module/bdev/raid/bdev_raid_rpc.o 00:04:07.955 CC module/bdev/raid/bdev_raid_sb.o 00:04:07.955 CC module/bdev/split/vbdev_split.o 00:04:07.955 CC module/bdev/raid/raid0.o 00:04:07.955 CC 
module/bdev/split/vbdev_split_rpc.o 00:04:07.955 CC module/bdev/raid/raid1.o 00:04:07.955 CC module/bdev/raid/concat.o 00:04:07.955 SO libspdk_vfu_device.so.3.0 00:04:07.955 CC module/bdev/lvol/vbdev_lvol.o 00:04:07.955 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:07.955 CC module/bdev/iscsi/bdev_iscsi.o 00:04:07.955 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:07.955 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:07.955 CC module/bdev/null/bdev_null.o 00:04:07.955 CC module/bdev/null/bdev_null_rpc.o 00:04:07.955 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:07.955 CC module/bdev/passthru/vbdev_passthru.o 00:04:07.955 CC module/bdev/nvme/bdev_nvme.o 00:04:07.955 CC module/bdev/malloc/bdev_malloc.o 00:04:07.955 CC module/bdev/error/vbdev_error.o 00:04:08.218 SYMLINK libspdk_vfu_device.so 00:04:08.218 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:08.477 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:08.477 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:08.477 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:08.477 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:08.477 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:08.477 LIB libspdk_blobfs_bdev.a 00:04:08.477 CC module/bdev/error/vbdev_error_rpc.o 00:04:08.477 CC module/bdev/nvme/nvme_rpc.o 00:04:08.477 CC module/bdev/nvme/bdev_mdns_client.o 00:04:08.477 LIB libspdk_sock_posix.a 00:04:08.477 SO libspdk_blobfs_bdev.so.6.0 00:04:08.477 CC module/bdev/nvme/vbdev_opal.o 00:04:08.477 LIB libspdk_bdev_split.a 00:04:08.477 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:08.477 SO libspdk_sock_posix.so.6.0 00:04:08.477 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:08.477 SO libspdk_bdev_split.so.6.0 00:04:08.477 LIB libspdk_bdev_gpt.a 00:04:08.477 LIB libspdk_bdev_null.a 00:04:08.477 SYMLINK libspdk_blobfs_bdev.so 00:04:08.477 SO libspdk_bdev_null.so.6.0 00:04:08.477 SO libspdk_bdev_gpt.so.6.0 00:04:08.734 LIB libspdk_bdev_ftl.a 00:04:08.734 SYMLINK libspdk_bdev_split.so 00:04:08.734 SO libspdk_bdev_ftl.so.6.0 00:04:08.734 SYMLINK libspdk_sock_posix.so 00:04:08.734 SYMLINK libspdk_bdev_gpt.so 00:04:08.734 LIB libspdk_bdev_aio.a 00:04:08.734 LIB libspdk_bdev_passthru.a 00:04:08.734 SYMLINK libspdk_bdev_null.so 00:04:08.734 LIB libspdk_bdev_malloc.a 00:04:08.734 LIB libspdk_bdev_iscsi.a 00:04:08.734 SO libspdk_bdev_aio.so.6.0 00:04:08.734 SO libspdk_bdev_malloc.so.6.0 00:04:08.734 SYMLINK libspdk_bdev_ftl.so 00:04:08.734 LIB libspdk_bdev_zone_block.a 00:04:08.734 SO libspdk_bdev_passthru.so.6.0 00:04:08.734 SO libspdk_bdev_iscsi.so.6.0 00:04:08.734 LIB libspdk_bdev_error.a 00:04:08.734 LIB libspdk_bdev_delay.a 00:04:08.734 SO libspdk_bdev_zone_block.so.6.0 00:04:08.734 SO libspdk_bdev_error.so.6.0 00:04:08.734 SO libspdk_bdev_delay.so.6.0 00:04:08.734 SYMLINK libspdk_bdev_aio.so 00:04:08.734 SYMLINK libspdk_bdev_passthru.so 00:04:08.734 SYMLINK libspdk_bdev_malloc.so 00:04:08.734 SYMLINK libspdk_bdev_iscsi.so 00:04:08.734 SYMLINK libspdk_bdev_zone_block.so 00:04:08.734 SYMLINK libspdk_bdev_error.so 00:04:08.992 SYMLINK libspdk_bdev_delay.so 00:04:08.992 LIB libspdk_bdev_lvol.a 00:04:08.992 LIB libspdk_bdev_virtio.a 00:04:08.992 SO libspdk_bdev_lvol.so.6.0 00:04:08.992 SO libspdk_bdev_virtio.so.6.0 00:04:08.992 SYMLINK libspdk_bdev_lvol.so 00:04:08.992 SYMLINK libspdk_bdev_virtio.so 00:04:09.249 LIB libspdk_bdev_raid.a 00:04:09.249 SO libspdk_bdev_raid.so.6.0 00:04:09.507 SYMLINK libspdk_bdev_raid.so 00:04:10.443 LIB libspdk_bdev_nvme.a 00:04:10.443 SO libspdk_bdev_nvme.so.7.0 00:04:10.701 SYMLINK libspdk_bdev_nvme.so 00:04:10.959 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:04:10.959 CC module/event/subsystems/sock/sock.o 00:04:10.959 CC module/event/subsystems/keyring/keyring.o 00:04:10.959 CC module/event/subsystems/iobuf/iobuf.o 00:04:10.959 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:10.959 CC module/event/subsystems/vmd/vmd.o 00:04:10.959 CC module/event/subsystems/scheduler/scheduler.o 00:04:10.959 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:10.959 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:11.217 LIB libspdk_event_keyring.a 00:04:11.217 LIB libspdk_event_sock.a 00:04:11.217 LIB libspdk_event_vhost_blk.a 00:04:11.217 LIB libspdk_event_scheduler.a 00:04:11.217 LIB libspdk_event_vfu_tgt.a 00:04:11.217 LIB libspdk_event_vmd.a 00:04:11.217 SO libspdk_event_keyring.so.1.0 00:04:11.217 SO libspdk_event_sock.so.5.0 00:04:11.217 LIB libspdk_event_iobuf.a 00:04:11.217 SO libspdk_event_vhost_blk.so.3.0 00:04:11.217 SO libspdk_event_scheduler.so.4.0 00:04:11.217 SO libspdk_event_vfu_tgt.so.3.0 00:04:11.217 SO libspdk_event_vmd.so.6.0 00:04:11.217 SO libspdk_event_iobuf.so.3.0 00:04:11.217 SYMLINK libspdk_event_sock.so 00:04:11.217 SYMLINK libspdk_event_keyring.so 00:04:11.217 SYMLINK libspdk_event_vhost_blk.so 00:04:11.217 SYMLINK libspdk_event_vfu_tgt.so 00:04:11.217 SYMLINK libspdk_event_scheduler.so 00:04:11.217 SYMLINK libspdk_event_vmd.so 00:04:11.217 SYMLINK libspdk_event_iobuf.so 00:04:11.477 CC module/event/subsystems/accel/accel.o 00:04:11.736 LIB libspdk_event_accel.a 00:04:11.736 SO libspdk_event_accel.so.6.0 00:04:11.736 SYMLINK libspdk_event_accel.so 00:04:11.993 CC module/event/subsystems/bdev/bdev.o 00:04:11.993 LIB libspdk_event_bdev.a 00:04:12.250 SO libspdk_event_bdev.so.6.0 00:04:12.250 SYMLINK libspdk_event_bdev.so 00:04:12.507 CC module/event/subsystems/ublk/ublk.o 00:04:12.507 CC module/event/subsystems/nbd/nbd.o 00:04:12.507 CC module/event/subsystems/scsi/scsi.o 00:04:12.507 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:12.507 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:12.507 LIB libspdk_event_nbd.a 00:04:12.507 LIB libspdk_event_ublk.a 00:04:12.507 LIB libspdk_event_scsi.a 00:04:12.507 SO libspdk_event_ublk.so.3.0 00:04:12.507 SO libspdk_event_nbd.so.6.0 00:04:12.507 SO libspdk_event_scsi.so.6.0 00:04:12.507 SYMLINK libspdk_event_ublk.so 00:04:12.507 SYMLINK libspdk_event_nbd.so 00:04:12.765 SYMLINK libspdk_event_scsi.so 00:04:12.765 LIB libspdk_event_nvmf.a 00:04:12.765 SO libspdk_event_nvmf.so.6.0 00:04:12.765 SYMLINK libspdk_event_nvmf.so 00:04:12.765 CC module/event/subsystems/iscsi/iscsi.o 00:04:12.765 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:13.023 LIB libspdk_event_vhost_scsi.a 00:04:13.023 SO libspdk_event_vhost_scsi.so.3.0 00:04:13.023 LIB libspdk_event_iscsi.a 00:04:13.023 SO libspdk_event_iscsi.so.6.0 00:04:13.023 SYMLINK libspdk_event_vhost_scsi.so 00:04:13.023 SYMLINK libspdk_event_iscsi.so 00:04:13.291 SO libspdk.so.6.0 00:04:13.291 SYMLINK libspdk.so 00:04:13.553 CC test/rpc_client/rpc_client_test.o 00:04:13.553 TEST_HEADER include/spdk/accel.h 00:04:13.553 TEST_HEADER include/spdk/accel_module.h 00:04:13.553 CXX app/trace/trace.o 00:04:13.553 TEST_HEADER include/spdk/assert.h 00:04:13.553 TEST_HEADER include/spdk/barrier.h 00:04:13.553 CC app/trace_record/trace_record.o 00:04:13.553 TEST_HEADER include/spdk/base64.h 00:04:13.553 TEST_HEADER include/spdk/bdev.h 00:04:13.553 CC app/spdk_nvme_identify/identify.o 00:04:13.553 TEST_HEADER include/spdk/bdev_module.h 00:04:13.553 CC app/spdk_lspci/spdk_lspci.o 00:04:13.553 CC 
app/spdk_nvme_perf/perf.o 00:04:13.553 TEST_HEADER include/spdk/bdev_zone.h 00:04:13.553 TEST_HEADER include/spdk/bit_array.h 00:04:13.553 TEST_HEADER include/spdk/bit_pool.h 00:04:13.553 TEST_HEADER include/spdk/blob_bdev.h 00:04:13.553 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:13.553 TEST_HEADER include/spdk/blobfs.h 00:04:13.553 TEST_HEADER include/spdk/blob.h 00:04:13.553 TEST_HEADER include/spdk/conf.h 00:04:13.553 TEST_HEADER include/spdk/config.h 00:04:13.553 TEST_HEADER include/spdk/cpuset.h 00:04:13.553 TEST_HEADER include/spdk/crc16.h 00:04:13.553 TEST_HEADER include/spdk/crc32.h 00:04:13.553 TEST_HEADER include/spdk/crc64.h 00:04:13.553 TEST_HEADER include/spdk/dif.h 00:04:13.553 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:13.553 TEST_HEADER include/spdk/dma.h 00:04:13.553 TEST_HEADER include/spdk/endian.h 00:04:13.553 TEST_HEADER include/spdk/env_dpdk.h 00:04:13.553 TEST_HEADER include/spdk/env.h 00:04:13.553 TEST_HEADER include/spdk/event.h 00:04:13.554 TEST_HEADER include/spdk/fd_group.h 00:04:13.554 TEST_HEADER include/spdk/fd.h 00:04:13.554 TEST_HEADER include/spdk/file.h 00:04:13.554 CC app/iscsi_tgt/iscsi_tgt.o 00:04:13.554 CC app/nvmf_tgt/nvmf_main.o 00:04:13.554 TEST_HEADER include/spdk/ftl.h 00:04:13.554 TEST_HEADER include/spdk/gpt_spec.h 00:04:13.554 TEST_HEADER include/spdk/hexlify.h 00:04:13.554 TEST_HEADER include/spdk/histogram_data.h 00:04:13.554 TEST_HEADER include/spdk/idxd.h 00:04:13.554 TEST_HEADER include/spdk/idxd_spec.h 00:04:13.554 TEST_HEADER include/spdk/init.h 00:04:13.554 TEST_HEADER include/spdk/ioat.h 00:04:13.554 TEST_HEADER include/spdk/ioat_spec.h 00:04:13.554 TEST_HEADER include/spdk/iscsi_spec.h 00:04:13.554 CC examples/sock/hello_world/hello_sock.o 00:04:13.554 TEST_HEADER include/spdk/json.h 00:04:13.554 CC test/thread/poller_perf/poller_perf.o 00:04:13.554 TEST_HEADER include/spdk/jsonrpc.h 00:04:13.554 CC examples/accel/perf/accel_perf.o 00:04:13.554 CC examples/nvme/hello_world/hello_world.o 00:04:13.554 TEST_HEADER include/spdk/keyring.h 00:04:13.554 TEST_HEADER include/spdk/keyring_module.h 00:04:13.554 CC examples/vmd/lsvmd/lsvmd.o 00:04:13.554 CC examples/util/zipf/zipf.o 00:04:13.554 TEST_HEADER include/spdk/likely.h 00:04:13.554 TEST_HEADER include/spdk/log.h 00:04:13.554 CC examples/ioat/perf/perf.o 00:04:13.554 CC test/event/event_perf/event_perf.o 00:04:13.554 TEST_HEADER include/spdk/lvol.h 00:04:13.554 CC examples/idxd/perf/perf.o 00:04:13.554 TEST_HEADER include/spdk/memory.h 00:04:13.554 CC test/nvme/aer/aer.o 00:04:13.554 TEST_HEADER include/spdk/mmio.h 00:04:13.554 TEST_HEADER include/spdk/nbd.h 00:04:13.554 TEST_HEADER include/spdk/notify.h 00:04:13.554 CC app/spdk_tgt/spdk_tgt.o 00:04:13.554 TEST_HEADER include/spdk/nvme.h 00:04:13.554 TEST_HEADER include/spdk/nvme_intel.h 00:04:13.554 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:13.554 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:13.554 TEST_HEADER include/spdk/nvme_spec.h 00:04:13.554 TEST_HEADER include/spdk/nvme_zns.h 00:04:13.554 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:13.554 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:13.819 TEST_HEADER include/spdk/nvmf.h 00:04:13.819 TEST_HEADER include/spdk/nvmf_spec.h 00:04:13.819 CC test/dma/test_dma/test_dma.o 00:04:13.819 CC examples/bdev/hello_world/hello_bdev.o 00:04:13.819 TEST_HEADER include/spdk/nvmf_transport.h 00:04:13.819 CC test/accel/dif/dif.o 00:04:13.819 TEST_HEADER include/spdk/opal.h 00:04:13.819 TEST_HEADER include/spdk/opal_spec.h 00:04:13.819 CC examples/thread/thread/thread_ex.o 00:04:13.819 CC 
test/blobfs/mkfs/mkfs.o 00:04:13.819 TEST_HEADER include/spdk/pci_ids.h 00:04:13.819 CC test/bdev/bdevio/bdevio.o 00:04:13.819 TEST_HEADER include/spdk/pipe.h 00:04:13.819 TEST_HEADER include/spdk/queue.h 00:04:13.819 CC examples/blob/hello_world/hello_blob.o 00:04:13.819 TEST_HEADER include/spdk/reduce.h 00:04:13.819 CC examples/nvmf/nvmf/nvmf.o 00:04:13.819 TEST_HEADER include/spdk/rpc.h 00:04:13.819 CC test/app/bdev_svc/bdev_svc.o 00:04:13.819 TEST_HEADER include/spdk/scheduler.h 00:04:13.819 TEST_HEADER include/spdk/scsi.h 00:04:13.819 TEST_HEADER include/spdk/scsi_spec.h 00:04:13.819 TEST_HEADER include/spdk/sock.h 00:04:13.819 TEST_HEADER include/spdk/stdinc.h 00:04:13.819 TEST_HEADER include/spdk/string.h 00:04:13.819 TEST_HEADER include/spdk/thread.h 00:04:13.819 TEST_HEADER include/spdk/trace.h 00:04:13.819 CC test/env/mem_callbacks/mem_callbacks.o 00:04:13.819 TEST_HEADER include/spdk/trace_parser.h 00:04:13.819 TEST_HEADER include/spdk/tree.h 00:04:13.819 TEST_HEADER include/spdk/ublk.h 00:04:13.819 TEST_HEADER include/spdk/util.h 00:04:13.819 TEST_HEADER include/spdk/uuid.h 00:04:13.819 TEST_HEADER include/spdk/version.h 00:04:13.819 CC test/lvol/esnap/esnap.o 00:04:13.819 LINK spdk_lspci 00:04:13.819 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:13.819 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:13.819 TEST_HEADER include/spdk/vhost.h 00:04:13.819 TEST_HEADER include/spdk/vmd.h 00:04:13.819 TEST_HEADER include/spdk/xor.h 00:04:13.819 TEST_HEADER include/spdk/zipf.h 00:04:13.819 CXX test/cpp_headers/accel.o 00:04:13.819 LINK rpc_client_test 00:04:13.819 LINK lsvmd 00:04:13.819 LINK zipf 00:04:13.819 LINK interrupt_tgt 00:04:14.081 LINK nvmf_tgt 00:04:14.081 LINK event_perf 00:04:14.081 LINK poller_perf 00:04:14.081 LINK spdk_trace_record 00:04:14.081 LINK iscsi_tgt 00:04:14.081 LINK mkfs 00:04:14.081 LINK hello_sock 00:04:14.081 LINK ioat_perf 00:04:14.081 LINK hello_world 00:04:14.081 LINK spdk_tgt 00:04:14.081 LINK bdev_svc 00:04:14.081 CXX test/cpp_headers/accel_module.o 00:04:14.081 LINK hello_blob 00:04:14.081 LINK thread 00:04:14.375 LINK hello_bdev 00:04:14.375 CC examples/nvme/reconnect/reconnect.o 00:04:14.375 LINK aer 00:04:14.375 LINK spdk_trace 00:04:14.375 CC test/event/reactor/reactor.o 00:04:14.375 LINK idxd_perf 00:04:14.375 CC examples/vmd/led/led.o 00:04:14.375 CC examples/ioat/verify/verify.o 00:04:14.375 LINK nvmf 00:04:14.375 LINK test_dma 00:04:14.375 CC test/env/vtophys/vtophys.o 00:04:14.375 CC test/nvme/reset/reset.o 00:04:14.375 LINK dif 00:04:14.375 CC test/app/histogram_perf/histogram_perf.o 00:04:14.375 CXX test/cpp_headers/assert.o 00:04:14.676 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:14.676 LINK accel_perf 00:04:14.676 CC examples/blob/cli/blobcli.o 00:04:14.676 LINK bdevio 00:04:14.676 CC app/spdk_nvme_discover/discovery_aer.o 00:04:14.676 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:14.676 LINK reactor 00:04:14.676 CC test/event/reactor_perf/reactor_perf.o 00:04:14.676 CXX test/cpp_headers/barrier.o 00:04:14.676 CC examples/bdev/bdevperf/bdevperf.o 00:04:14.676 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:14.676 CC test/env/memory/memory_ut.o 00:04:14.676 LINK led 00:04:14.676 CC examples/nvme/arbitration/arbitration.o 00:04:14.676 LINK histogram_perf 00:04:14.676 CC test/event/app_repeat/app_repeat.o 00:04:14.676 LINK vtophys 00:04:14.676 CC app/spdk_top/spdk_top.o 00:04:14.676 CC test/nvme/sgl/sgl.o 00:04:14.676 CC test/env/pci/pci_ut.o 00:04:14.676 CXX test/cpp_headers/base64.o 00:04:14.942 LINK verify 00:04:14.942 
CC test/event/scheduler/scheduler.o 00:04:14.942 LINK reactor_perf 00:04:14.942 LINK spdk_nvme_perf 00:04:14.942 LINK reconnect 00:04:14.942 CC examples/nvme/hotplug/hotplug.o 00:04:14.942 LINK mem_callbacks 00:04:14.942 CXX test/cpp_headers/bdev.o 00:04:14.942 LINK spdk_nvme_discover 00:04:14.942 CC test/nvme/e2edp/nvme_dp.o 00:04:14.942 LINK reset 00:04:14.942 LINK env_dpdk_post_init 00:04:14.942 CC test/app/jsoncat/jsoncat.o 00:04:14.942 LINK app_repeat 00:04:15.203 CC test/nvme/overhead/overhead.o 00:04:15.203 CXX test/cpp_headers/bdev_module.o 00:04:15.203 CC app/vhost/vhost.o 00:04:15.203 LINK spdk_nvme_identify 00:04:15.203 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:15.203 CXX test/cpp_headers/bdev_zone.o 00:04:15.203 CC examples/nvme/abort/abort.o 00:04:15.203 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:15.203 CC test/nvme/err_injection/err_injection.o 00:04:15.203 LINK scheduler 00:04:15.468 CC test/nvme/startup/startup.o 00:04:15.468 LINK sgl 00:04:15.468 LINK jsoncat 00:04:15.468 CXX test/cpp_headers/bit_array.o 00:04:15.468 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:15.468 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:15.468 LINK nvme_manage 00:04:15.468 LINK arbitration 00:04:15.468 LINK nvme_fuzz 00:04:15.468 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:15.468 LINK hotplug 00:04:15.468 CC test/app/stub/stub.o 00:04:15.468 CC test/nvme/reserve/reserve.o 00:04:15.468 LINK blobcli 00:04:15.468 LINK vhost 00:04:15.468 LINK pci_ut 00:04:15.468 CC test/nvme/simple_copy/simple_copy.o 00:04:15.468 CC app/spdk_dd/spdk_dd.o 00:04:15.468 LINK cmb_copy 00:04:15.468 LINK err_injection 00:04:15.468 LINK nvme_dp 00:04:15.731 LINK pmr_persistence 00:04:15.731 CC test/nvme/connect_stress/connect_stress.o 00:04:15.731 LINK overhead 00:04:15.731 CC test/nvme/boot_partition/boot_partition.o 00:04:15.731 LINK startup 00:04:15.731 CC test/nvme/compliance/nvme_compliance.o 00:04:15.731 CXX test/cpp_headers/bit_pool.o 00:04:15.731 CXX test/cpp_headers/blob_bdev.o 00:04:15.731 CXX test/cpp_headers/blobfs_bdev.o 00:04:15.731 CXX test/cpp_headers/blobfs.o 00:04:15.731 CC test/nvme/fused_ordering/fused_ordering.o 00:04:15.731 CXX test/cpp_headers/blob.o 00:04:15.731 LINK stub 00:04:15.731 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:15.731 CC test/nvme/fdp/fdp.o 00:04:15.731 CC test/nvme/cuse/cuse.o 00:04:15.731 CXX test/cpp_headers/conf.o 00:04:15.992 LINK reserve 00:04:15.992 CXX test/cpp_headers/config.o 00:04:15.992 CXX test/cpp_headers/cpuset.o 00:04:15.992 LINK abort 00:04:15.992 CXX test/cpp_headers/crc16.o 00:04:15.992 CC app/fio/nvme/fio_plugin.o 00:04:15.992 CXX test/cpp_headers/crc32.o 00:04:15.992 CXX test/cpp_headers/crc64.o 00:04:15.992 CXX test/cpp_headers/dif.o 00:04:15.992 LINK connect_stress 00:04:15.992 CXX test/cpp_headers/dma.o 00:04:15.992 LINK simple_copy 00:04:15.992 LINK bdevperf 00:04:15.992 LINK boot_partition 00:04:15.992 CC app/fio/bdev/fio_plugin.o 00:04:15.992 CXX test/cpp_headers/endian.o 00:04:16.254 CXX test/cpp_headers/env_dpdk.o 00:04:16.255 CXX test/cpp_headers/env.o 00:04:16.255 CXX test/cpp_headers/event.o 00:04:16.255 CXX test/cpp_headers/fd_group.o 00:04:16.255 CXX test/cpp_headers/fd.o 00:04:16.255 CXX test/cpp_headers/file.o 00:04:16.255 LINK fused_ordering 00:04:16.255 LINK doorbell_aers 00:04:16.255 CXX test/cpp_headers/ftl.o 00:04:16.255 CXX test/cpp_headers/gpt_spec.o 00:04:16.255 CXX test/cpp_headers/hexlify.o 00:04:16.255 CXX test/cpp_headers/histogram_data.o 00:04:16.255 LINK vhost_fuzz 00:04:16.255 CXX test/cpp_headers/idxd.o 
00:04:16.255 CXX test/cpp_headers/idxd_spec.o 00:04:16.255 LINK nvme_compliance 00:04:16.255 CXX test/cpp_headers/init.o 00:04:16.255 CXX test/cpp_headers/ioat.o 00:04:16.255 LINK spdk_dd 00:04:16.255 CXX test/cpp_headers/ioat_spec.o 00:04:16.255 CXX test/cpp_headers/iscsi_spec.o 00:04:16.518 LINK spdk_top 00:04:16.518 CXX test/cpp_headers/json.o 00:04:16.518 CXX test/cpp_headers/jsonrpc.o 00:04:16.518 CXX test/cpp_headers/keyring.o 00:04:16.518 CXX test/cpp_headers/keyring_module.o 00:04:16.518 CXX test/cpp_headers/likely.o 00:04:16.518 CXX test/cpp_headers/log.o 00:04:16.518 CXX test/cpp_headers/lvol.o 00:04:16.518 LINK fdp 00:04:16.518 CXX test/cpp_headers/memory.o 00:04:16.518 CXX test/cpp_headers/mmio.o 00:04:16.518 CXX test/cpp_headers/nbd.o 00:04:16.518 CXX test/cpp_headers/notify.o 00:04:16.518 CXX test/cpp_headers/nvme.o 00:04:16.518 CXX test/cpp_headers/nvme_intel.o 00:04:16.518 CXX test/cpp_headers/nvme_ocssd.o 00:04:16.518 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:16.518 LINK memory_ut 00:04:16.518 CXX test/cpp_headers/nvme_spec.o 00:04:16.518 CXX test/cpp_headers/nvme_zns.o 00:04:16.780 CXX test/cpp_headers/nvmf_cmd.o 00:04:16.780 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:16.780 CXX test/cpp_headers/nvmf.o 00:04:16.780 CXX test/cpp_headers/nvmf_spec.o 00:04:16.780 CXX test/cpp_headers/nvmf_transport.o 00:04:16.780 CXX test/cpp_headers/opal.o 00:04:16.780 CXX test/cpp_headers/opal_spec.o 00:04:16.780 CXX test/cpp_headers/pci_ids.o 00:04:16.780 CXX test/cpp_headers/pipe.o 00:04:16.780 CXX test/cpp_headers/queue.o 00:04:16.780 CXX test/cpp_headers/reduce.o 00:04:16.780 CXX test/cpp_headers/rpc.o 00:04:16.780 CXX test/cpp_headers/scheduler.o 00:04:16.780 CXX test/cpp_headers/scsi.o 00:04:16.780 CXX test/cpp_headers/scsi_spec.o 00:04:16.780 CXX test/cpp_headers/sock.o 00:04:16.780 CXX test/cpp_headers/stdinc.o 00:04:16.780 CXX test/cpp_headers/string.o 00:04:16.780 CXX test/cpp_headers/thread.o 00:04:16.780 CXX test/cpp_headers/trace.o 00:04:17.039 CXX test/cpp_headers/trace_parser.o 00:04:17.039 CXX test/cpp_headers/tree.o 00:04:17.039 CXX test/cpp_headers/ublk.o 00:04:17.039 CXX test/cpp_headers/util.o 00:04:17.039 LINK spdk_nvme 00:04:17.039 CXX test/cpp_headers/uuid.o 00:04:17.039 CXX test/cpp_headers/version.o 00:04:17.039 CXX test/cpp_headers/vfio_user_pci.o 00:04:17.039 LINK spdk_bdev 00:04:17.039 CXX test/cpp_headers/vfio_user_spec.o 00:04:17.039 CXX test/cpp_headers/vhost.o 00:04:17.039 CXX test/cpp_headers/vmd.o 00:04:17.039 CXX test/cpp_headers/xor.o 00:04:17.039 CXX test/cpp_headers/zipf.o 00:04:17.606 LINK cuse 00:04:18.182 LINK iscsi_fuzz 00:04:20.710 LINK esnap 00:04:20.995 00:04:20.995 real 0m56.909s 00:04:20.995 user 11m8.143s 00:04:20.995 sys 2m22.794s 00:04:20.995 00:41:08 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:20.995 00:41:08 make -- common/autotest_common.sh@10 -- $ set +x 00:04:20.995 ************************************ 00:04:20.995 END TEST make 00:04:20.995 ************************************ 00:04:20.995 00:41:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:20.995 00:41:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:20.995 00:41:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:20.995 00:41:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:20.995 00:41:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:20.995 00:41:08 -- pm/common@44 -- $ pid=3850763 00:04:20.995 00:41:08 -- pm/common@50 -- $ 
kill -TERM 3850763 00:04:20.995 00:41:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:20.995 00:41:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:20.995 00:41:08 -- pm/common@44 -- $ pid=3850765 00:04:20.995 00:41:08 -- pm/common@50 -- $ kill -TERM 3850765 00:04:20.995 00:41:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:20.995 00:41:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:20.995 00:41:08 -- pm/common@44 -- $ pid=3850767 00:04:20.995 00:41:08 -- pm/common@50 -- $ kill -TERM 3850767 00:04:20.995 00:41:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:20.995 00:41:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:20.995 00:41:08 -- pm/common@44 -- $ pid=3850801 00:04:20.995 00:41:08 -- pm/common@50 -- $ sudo -E kill -TERM 3850801 00:04:21.254 00:41:08 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:21.254 00:41:08 -- nvmf/common.sh@7 -- # uname -s 00:04:21.254 00:41:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:21.254 00:41:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:21.254 00:41:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:21.254 00:41:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:21.254 00:41:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:21.254 00:41:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:21.254 00:41:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:21.254 00:41:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:21.254 00:41:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:21.254 00:41:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.254 00:41:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:04:21.254 00:41:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:04:21.254 00:41:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.254 00:41:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.254 00:41:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:21.254 00:41:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:21.254 00:41:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:21.254 00:41:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.254 00:41:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.254 00:41:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.254 00:41:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.254 00:41:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.254 00:41:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.254 00:41:08 -- paths/export.sh@5 -- # export PATH 00:04:21.254 00:41:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.254 00:41:08 -- nvmf/common.sh@47 -- # : 0 00:04:21.254 00:41:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:21.254 00:41:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:21.254 00:41:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:21.254 00:41:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.254 00:41:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:21.254 00:41:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:21.254 00:41:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:21.254 00:41:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:21.254 00:41:08 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:21.254 00:41:08 -- spdk/autotest.sh@32 -- # uname -s 00:04:21.254 00:41:08 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:21.254 00:41:08 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:21.254 00:41:08 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:21.254 00:41:08 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:21.254 00:41:08 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:21.254 00:41:08 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:21.254 00:41:08 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:21.254 00:41:08 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:21.254 00:41:08 -- spdk/autotest.sh@48 -- # udevadm_pid=3904563 00:04:21.254 00:41:08 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:21.254 00:41:08 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:21.254 00:41:08 -- pm/common@17 -- # local monitor 00:04:21.254 00:41:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.254 00:41:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.254 00:41:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.254 00:41:08 -- pm/common@21 -- # date +%s 00:04:21.254 00:41:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.254 00:41:08 -- pm/common@21 -- # date +%s 00:04:21.254 00:41:08 -- pm/common@25 -- # sleep 1 00:04:21.254 00:41:08 -- pm/common@21 -- # date +%s 00:04:21.254 00:41:08 -- pm/common@21 -- # date +%s 00:04:21.254 00:41:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715726468 00:04:21.254 00:41:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715726468 
00:04:21.254 00:41:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715726468 00:04:21.254 00:41:08 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715726468 00:04:21.254 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715726468_collect-vmstat.pm.log 00:04:21.254 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715726468_collect-cpu-load.pm.log 00:04:21.254 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715726468_collect-cpu-temp.pm.log 00:04:21.254 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715726468_collect-bmc-pm.bmc.pm.log 00:04:22.192 00:41:09 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:22.192 00:41:09 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:22.192 00:41:09 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:22.192 00:41:09 -- common/autotest_common.sh@10 -- # set +x 00:04:22.192 00:41:09 -- spdk/autotest.sh@59 -- # create_test_list 00:04:22.192 00:41:09 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:22.192 00:41:09 -- common/autotest_common.sh@10 -- # set +x 00:04:22.192 00:41:09 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:22.192 00:41:09 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:22.192 00:41:09 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:22.192 00:41:09 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:22.192 00:41:09 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:22.192 00:41:09 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:22.192 00:41:09 -- common/autotest_common.sh@1451 -- # uname 00:04:22.192 00:41:09 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:22.192 00:41:09 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:22.192 00:41:09 -- common/autotest_common.sh@1471 -- # uname 00:04:22.192 00:41:09 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:22.192 00:41:09 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:22.192 00:41:09 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:22.192 00:41:09 -- spdk/autotest.sh@72 -- # hash lcov 00:04:22.192 00:41:09 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:22.192 00:41:09 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:22.192 --rc lcov_branch_coverage=1 00:04:22.192 --rc lcov_function_coverage=1 00:04:22.192 --rc genhtml_branch_coverage=1 00:04:22.192 --rc genhtml_function_coverage=1 00:04:22.192 --rc genhtml_legend=1 00:04:22.192 --rc geninfo_all_blocks=1 00:04:22.192 ' 00:04:22.192 00:41:09 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:22.192 --rc lcov_branch_coverage=1 00:04:22.192 --rc lcov_function_coverage=1 00:04:22.192 --rc genhtml_branch_coverage=1 00:04:22.192 --rc genhtml_function_coverage=1 00:04:22.192 --rc genhtml_legend=1 00:04:22.192 --rc geninfo_all_blocks=1 00:04:22.192 ' 
00:04:22.192 00:41:09 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:22.192 --rc lcov_branch_coverage=1 00:04:22.192 --rc lcov_function_coverage=1 00:04:22.192 --rc genhtml_branch_coverage=1 00:04:22.192 --rc genhtml_function_coverage=1 00:04:22.192 --rc genhtml_legend=1 00:04:22.192 --rc geninfo_all_blocks=1 00:04:22.192 --no-external' 00:04:22.192 00:41:09 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:22.192 --rc lcov_branch_coverage=1 00:04:22.192 --rc lcov_function_coverage=1 00:04:22.192 --rc genhtml_branch_coverage=1 00:04:22.192 --rc genhtml_function_coverage=1 00:04:22.192 --rc genhtml_legend=1 00:04:22.192 --rc geninfo_all_blocks=1 00:04:22.192 --no-external' 00:04:22.192 00:41:09 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:22.453 lcov: LCOV version 1.14 00:04:22.453 00:41:09 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:37.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:37.334 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:37.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:37.334 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:37.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:37.335 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:37.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:37.335 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:55.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:55.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:55.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:55.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:55.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:55.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:55.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:55.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:55.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions 
found 00:04:55.446 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:55.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:55.447 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:55.447 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 
00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:55.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:55.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:55.448 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 
00:04:55.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:55.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:55.709 00:41:42 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:55.709 00:41:42 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:55.709 00:41:42 -- common/autotest_common.sh@10 -- # set +x 00:04:55.709 00:41:42 -- spdk/autotest.sh@91 -- # rm -f 00:04:55.709 00:41:42 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:56.646 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:04:56.646 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:04:56.646 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:04:56.646 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:04:56.646 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 00:04:56.646 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:04:56.646 0000:00:04.2 (8086 3c22): Already using the ioatdma driver 00:04:56.646 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:04:56.646 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:04:56.904 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:04:56.904 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:04:56.904 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:04:56.904 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:04:56.904 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:04:56.904 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:04:56.904 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:04:56.904 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:04:56.904 00:41:43 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:56.904 00:41:43 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:56.904 00:41:43 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:56.904 00:41:43 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:56.904 00:41:43 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:56.904 00:41:43 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:56.904 00:41:43 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:56.904 00:41:43 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:56.904 00:41:43 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:56.904 00:41:43 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:56.904 00:41:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:56.905 00:41:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:56.905 00:41:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:56.905 00:41:43 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:56.905 00:41:43 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:56.905 No valid GPT data, bailing 00:04:56.905 00:41:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:56.905 00:41:43 -- scripts/common.sh@391 -- # pt= 00:04:56.905 00:41:43 -- scripts/common.sh@392 -- # return 1 00:04:56.905 00:41:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:56.905 1+0 records in 00:04:56.905 1+0 records out 00:04:56.905 1048576 bytes (1.0 
MB, 1.0 MiB) copied, 0.00199532 s, 526 MB/s 00:04:56.905 00:41:43 -- spdk/autotest.sh@118 -- # sync 00:04:56.905 00:41:43 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:56.905 00:41:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:56.905 00:41:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:58.807 00:41:45 -- spdk/autotest.sh@124 -- # uname -s 00:04:58.807 00:41:45 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:58.807 00:41:45 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:58.807 00:41:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:58.807 00:41:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.807 00:41:45 -- common/autotest_common.sh@10 -- # set +x 00:04:58.807 ************************************ 00:04:58.807 START TEST setup.sh 00:04:58.807 ************************************ 00:04:58.807 00:41:45 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:58.807 * Looking for test storage... 00:04:58.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:58.807 00:41:45 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:58.807 00:41:45 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:58.807 00:41:45 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:58.807 00:41:45 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:58.807 00:41:45 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.807 00:41:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:58.807 ************************************ 00:04:58.807 START TEST acl 00:04:58.807 ************************************ 00:04:58.807 00:41:45 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:58.807 * Looking for test storage... 
00:04:58.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:58.807 00:41:45 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:58.807 00:41:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:58.807 00:41:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:58.807 00:41:45 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:58.807 00:41:45 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:58.807 00:41:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:58.807 00:41:45 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:58.807 00:41:45 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:58.807 00:41:45 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:58.807 00:41:45 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:58.807 00:41:45 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:58.807 00:41:45 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:58.807 00:41:45 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:58.807 00:41:45 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:58.807 00:41:45 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:58.807 00:41:45 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:00.207 00:41:46 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:00.207 00:41:46 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:00.207 00:41:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:00.207 00:41:46 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:00.207 00:41:46 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.207 00:41:46 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:00.775 Hugepages 00:05:00.775 node hugesize free / total 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:00.776 00:05:00.776 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:05:00.776 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:01.034 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:01.034 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.035 00:41:47 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:84:00.0 == *:*:*.* ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:01.035 00:41:47 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:01.035 00:41:47 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:01.035 00:41:47 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.035 00:41:47 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:01.035 ************************************ 00:05:01.035 START TEST denied 00:05:01.035 ************************************ 00:05:01.035 00:41:47 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:05:01.035 00:41:47 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:84:00.0' 00:05:01.035 00:41:47 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:84:00.0' 00:05:01.035 00:41:47 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:01.035 00:41:47 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.035 00:41:47 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:01.972 0000:84:00.0 (8086 0a54): Skipping denied controller at 0000:84:00.0 00:05:01.972 00:41:48 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:84:00.0 00:05:01.972 00:41:48 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:01.972 00:41:48 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:01.972 00:41:48 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:84:00.0 ]] 00:05:01.972 00:41:48 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:84:00.0/driver 00:05:01.972 00:41:48 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:01.972 00:41:48 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:01.972 00:41:48 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:01.972 00:41:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:01.972 00:41:48 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:04.506 00:05:04.506 real 0m3.137s 00:05:04.506 user 0m0.892s 00:05:04.506 sys 0m1.477s 00:05:04.506 00:41:51 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.506 00:41:51 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:04.506 ************************************ 00:05:04.506 END TEST denied 00:05:04.506 ************************************ 00:05:04.506 00:41:51 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:04.506 00:41:51 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:04.506 00:41:51 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.506 00:41:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:04.506 ************************************ 00:05:04.506 START TEST allowed 00:05:04.506 ************************************ 00:05:04.506 00:41:51 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:05:04.506 00:41:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:84:00.0 00:05:04.506 00:41:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:04.506 00:41:51 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:84:00.0 .*: nvme -> .*' 00:05:04.506 00:41:51 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.506 00:41:51 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:06.412 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:05:06.412 00:41:53 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:06.412 00:41:53 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:06.412 00:41:53 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:06.412 00:41:53 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:06.413 00:41:53 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:07.349 00:05:07.349 real 0m3.219s 00:05:07.349 user 0m0.834s 00:05:07.349 sys 0m1.406s 00:05:07.349 00:41:54 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.349 00:41:54 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:07.349 ************************************ 00:05:07.349 END TEST allowed 00:05:07.349 ************************************ 00:05:07.608 00:05:07.608 real 0m8.741s 00:05:07.608 user 0m2.671s 00:05:07.608 sys 0m4.433s 00:05:07.608 00:41:54 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.609 00:41:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:07.609 ************************************ 00:05:07.609 END TEST acl 00:05:07.609 ************************************ 00:05:07.609 00:41:54 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:07.609 00:41:54 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.609 00:41:54 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.609 00:41:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:07.609 ************************************ 00:05:07.609 START TEST hugepages 00:05:07.609 ************************************ 00:05:07.609 00:41:54 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:07.609 * Looking for test storage... 00:05:07.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 27179304 kB' 'MemAvailable: 31940536 kB' 'Buffers: 2716 kB' 'Cached: 18748836 kB' 'SwapCached: 0 kB' 'Active: 14815100 kB' 'Inactive: 4484684 kB' 'Active(anon): 14179140 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551064 kB' 'Mapped: 241576 kB' 'Shmem: 13630908 kB' 'KReclaimable: 231228 kB' 'Slab: 523380 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 292152 kB' 'KernelStack: 10160 kB' 'PageTables: 9200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32437048 kB' 'Committed_AS: 15213988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190452 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB' 00:05:07.609 00:41:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:05:07.609 00:41:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue
... (setup/common.sh@31-32: the same read / [[ ... == Hugepagesize ]] / continue iteration repeats for every remaining /proc/meminfo key) ...
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:07.610 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:07.611 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:07.611 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:07.611 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:07.611 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:07.611 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:07.611 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
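The get_meminfo call traced above is a keyed lookup over /proc/meminfo (or a per-node meminfo file when a node is given). A minimal standalone sketch of that pattern follows; the function name and layout are illustrative, not SPDK's actual setup/common.sh code.

#!/usr/bin/env bash
# Minimal sketch of the meminfo lookup pattern seen in the trace above.
# Assumes a Linux /proc/meminfo; get_meminfo_sketch is a made-up name.
shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # A per-node lookup reads the node-local meminfo instead, if present.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <n> "; strip it so both
    # file flavors parse identically (the same trick as in the trace).
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"    # kB for sizes, a bare count for HugePages_* keys
        return 0
    done
    return 1
}

get_meminfo_sketch Hugepagesize    # prints 2048 on this machine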
00:05:07.611 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:07.611 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:07.611 00:41:54 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:07.611 00:41:54 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:07.611 00:41:54 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:07.611 00:41:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:07.611 ************************************
00:05:07.611 START TEST default_setup
00:05:07.611 ************************************
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:05:07.611 00:41:54 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
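The get_test_nr_hugepages arithmetic above reduces to one division: the requested pool size over the default huge page size, with the result assigned to each requested node. The trace implies the 2097152 argument is a size in kB (2 GiB), since 2097152 / 2048 gives exactly the nr_hugepages=1024 the test settles on. A rough sketch, reusing the illustrative get_meminfo_sketch helper from above:

# Sketch of the sizing step; variable names are illustrative and the kB
# interpretation of the size argument is inferred from the trace.
size_kb=2097152                                       # requested pool, 2 GiB
hugepagesize_kb=$(get_meminfo_sketch Hugepagesize)    # 2048 on this box
nr_hugepages=$(( size_kb / hugepagesize_kb ))         # -> 1024 pages

declare -a nodes_test
for node_id in 0; do            # node ids passed by the caller ("0" here)
    nodes_test[node_id]=$nr_hugepages
done
echo "node0: ${nodes_test[0]} x ${hugepagesize_kb} kB pages"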
00:05:08.546 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci
00:05:08.546 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci
00:05:08.546 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci
00:05:08.547 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci
00:05:08.547 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci
00:05:08.547 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci
00:05:08.547 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci
00:05:08.806 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci
00:05:08.806 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci
00:05:08.806 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci
00:05:08.806 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci
00:05:08.806 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci
00:05:08.806 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci
00:05:08.806 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci
00:05:08.806 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci
00:05:08.806 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci
00:05:09.793 0000:84:00.0 (8086 0a54): nvme -> vfio-pci
00:05:09.793 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:09.793 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:05:09.793 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:05:09.793 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:05:09.793 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:05:09.793 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:05:09.793 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:05:09.793 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:09.793 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:09.794 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:09.794 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:09.794 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:09.794 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:09.794 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.794 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.794 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.794 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.794 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.794 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:09.794 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:09.794 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29275516 kB' 'MemAvailable: 34036748 kB' 'Buffers: 2716 kB' 'Cached: 18748912 kB' 'SwapCached: 0 kB' 'Active: 14832600 kB' 'Inactive: 4484684 kB' 'Active(anon): 14196640 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569312 kB' 'Mapped: 241364 kB' 'Shmem: 13630984 kB' 'KReclaimable: 231228 kB' 'Slab: 523404 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 292176 kB' 'KernelStack: 10224 kB' 'PageTables: 9484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15236416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190644 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
00:05:09.794 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.794 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
... (setup/common.sh@31-32: the same read / [[ ... == AnonHugePages ]] / continue iteration repeats for every remaining /proc/meminfo key) ...
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
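With anon now known to be 0, the run goes on to probe HugePages_Surp and HugePages_Rsvd through the same lookup. The trace does not show the assertions verify_nr_hugepages applies, so the following is only a plausible sketch of the bookkeeping such probes enable, again using the illustrative helper from above:

# Hypothetical consistency checks; the real verify_nr_hugepages logic
# is not visible in this trace, so treat this as an assumed illustration.
anon=$(get_meminfo_sketch AnonHugePages)     # 0 kB: no THP interference
surp=$(get_meminfo_sketch HugePages_Surp)    # 0: no surplus pages
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0: nothing pre-reserved
total=$(get_meminfo_sketch HugePages_Total)  # 1024 per the snapshot
free=$(get_meminfo_sketch HugePages_Free)    # 1024 per the snapshot
expected=1024                                # what default_setup requested

(( total - surp == expected )) || echo "unexpected pool size: $total"
(( free <= total && resv <= free )) || echo "hugetlb accounting mismatch"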
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29278664 kB' 'MemAvailable: 34039896 kB' 'Buffers: 2716 kB' 'Cached: 18748912 kB' 'SwapCached: 0 kB' 'Active: 14834280 kB' 'Inactive: 4484684 kB' 'Active(anon): 14198320 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570480 kB' 'Mapped: 241364 kB' 'Shmem: 13630984 kB' 'KReclaimable: 231228 kB' 'Slab: 523404 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 292176 kB' 'KernelStack: 10496 kB' 'PageTables: 9760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15235172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190708 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.795 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
... (setup/common.sh@31-32: the same read / [[ ... == HugePages_Surp ]] / continue iteration repeats for every remaining /proc/meminfo key) ...
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.797 00:41:56 
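The scan just traced is the body of setup/common.sh's get_meminfo helper. A minimal sketch of the pattern, reconstructed from the line tags visible in this trace (not the verbatim SPDK source; details such as the return-on-miss behavior are assumptions):

    shopt -s extglob                     # needed for the +([0-9]) pattern below
    get_meminfo() {                      # usage: get_meminfo <field> [numa-node]
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, prefer that node's own meminfo (common.sh@23-24).
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # node files prefix every line with "Node N "
        while IFS=': ' read -r var val _; do
            # Skip each field until the requested one, then print its value.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Under xtrace this linear scan is what produces the long runs of [[ ... ]] / continue pairs above: one pair per meminfo field until the requested field matches.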
00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:09.797 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29279896 kB' 'MemAvailable: 34041128 kB' 'Buffers: 2716 kB' 'Cached: 18748912 kB' 'SwapCached: 0 kB' 'Active: 14833540 kB' 'Inactive: 4484684 kB' 'Active(anon): 14197580 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569648 kB' 'Mapped: 241288 kB' 'Shmem: 13630984 kB' 'KReclaimable: 231228 kB' 'Slab: 523372 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 292144 kB' 'KernelStack: 10144 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15234296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190500 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
00:05:09.797-00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [per-field scan elided: every field listed above, MemTotal through HugePages_Free, is tested with [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and skipped with continue]
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:09.799 nr_hugepages=1024
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:09.799 resv_hugepages=0
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:09.799 surplus_hugepages=0
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:09.799 anon_hugepages=0
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
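The accounting just traced (hugepages.sh@99-109) checks that the kernel's hugepage pool matches what the test requested. A sketch of that step, assuming the get_meminfo helper above; the wrapper name check_hugepage_accounting and the hardcoded expectation are illustrative assumptions, not SPDK's code:

    check_hugepage_accounting() {
        local expected=1024              # assumed: the value default_setup requested
        local surp resv total
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        total=$(get_meminfo HugePages_Total)
        echo "nr_hugepages=$expected"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        # The pool is consistent when the kernel's total covers the request
        # plus any surplus and reserved pages; in this log surp=resv=0, so
        # both arithmetic tests reduce to (( 1024 == 1024 )).
        (( total == expected + surp + resv )) && (( total == expected ))
    }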
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:09.799 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29279808 kB' 'MemAvailable: 34041040 kB' 'Buffers: 2716 kB' 'Cached: 18748952 kB' 'SwapCached: 0 kB' 'Active: 14832580 kB' 'Inactive: 4484684 kB' 'Active(anon): 14196620 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568792 kB' 'Mapped: 241328 kB' 'Shmem: 13631024 kB' 'KReclaimable: 231228 kB' 'Slab: 523280 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 292052 kB' 'KernelStack: 10096 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15234316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190484 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
00:05:09.799-00:05:09.801 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [per-field scan elided: every field listed above, MemTotal through Unaccepted, is tested with [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and skipped with continue]
00:05:09.801 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.801 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:05:09.801 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:09.801 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:09.801 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:05:09.801 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:05:09.801 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:09.801 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:09.801 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:09.801 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:09.801 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:09.801 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
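get_nodes, traced just above, walks the NUMA node directories and records each node's hugepage count (here 1024 on node0 and 0 on node1). A sketch of that walk, with the glob and array names as they appear in the trace; reading each node's HugePages_Total via get_meminfo is an assumption about where the per-node values come from:

    shopt -s extglob                 # the node+([0-9]) glob needs extglob
    nodes_sys=()
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # Index by the numeric suffix, e.g. ".../node1" -> nodes_sys[1].
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))           # this log shows no_nodes=2 on the test box
    }

The per-node verification that follows then calls get_meminfo with a node argument, which is what switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo in the next trace lines.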
00:05:09.801 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 [xtrace elided: the field scan reads the snapshot pairwise with IFS=': ' read -r var val _ and takes the 'continue' branch for every key from MemTotal through HugePages_Total; none of them matches HugePages_Surp]
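The elided scan is easier to read in function form. Below is a condensed, hypothetical re-implementation of get_meminfo as traced through setup/common.sh@17-33 (same IFS=': ' read and continue-until-match shape; not the verbatim SPDK source):

    get_meminfo() {                    # usage: get_meminfo <key> [numa-node]
        local get=$1 node=${2:-} var val _ mem_f=/proc/meminfo mem
        # prefer the per-node sysfs view when a node id is given and present
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")          # normalize per-node lines
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue      # the skipped keys elided above
            echo "$val"                           # e.g. 0 for HugePages_Surp
            return 0
        done
        return 1
    }

The match that ends the scan follows immediately below: HugePages_Surp compares equal, 0 is echoed, and the helper returns 0.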
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:09.802 node0=1024 expecting 1024
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:09.802 
00:05:09.802 real 0m2.159s
00:05:09.802 user 0m0.543s
00:05:09.802 sys 0m0.701s
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:09.802 00:41:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:09.802 ************************************
00:05:09.802 END TEST default_setup
00:05:09.802 ************************************
00:05:09.802 00:41:56 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:09.802 00:41:56 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:09.802 00:41:56 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:09.802 00:41:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:09.802 ************************************
00:05:09.802 START TEST per_node_1G_alloc
00:05:09.802 ************************************
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:09.803 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:10.080 00:41:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:11.024 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:05:11.024 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:11.024 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:05:11.024 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:05:11.024 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:05:11.024 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:05:11.025 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:05:11.025 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:05:11.025 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:05:11.025 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:05:11.025 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:05:11.025 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:05:11.025 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:05:11.025 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:05:11.025 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:05:11.025 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:05:11.025 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
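Summarizing the sizing just traced: get_test_nr_hugepages 1048576 0 1 requests 1 GiB worth of default-size pages per listed node, i.e. 1048576 kB / 2048 kB (the Hugepagesize reported in the snapshots) = 512 pages, assigned to node 0 and node 1 alike, after which setup.sh is driven with NRHUGE=512 HUGENODE=0,1 exactly as shown above. A back-of-the-envelope sketch of the same arithmetic (variable names follow the trace; the setup.sh path is abbreviated):

    size=1048576                                  # kB requested per node (1 GiB)
    default_hugepages=2048                        # kB per page (Hugepagesize)
    nr_hugepages=$((size / default_hugepages))    # 512 pages
    nodes_test=()
    for node_id in 0 1; do
        nodes_test[node_id]=$nr_hugepages         # 512 pages on each node
    done
    NRHUGE=$nr_hugepages HUGENODE=0,1 ./scripts/setup.sh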
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29261932 kB' 'MemAvailable: 34023164 kB' 'Buffers: 2716 kB' 'Cached: 18749028 kB' 'SwapCached: 0 kB' 'Active: 14832984 kB' 'Inactive: 4484684 kB' 'Active(anon): 14197024 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569068 kB' 'Mapped: 241352 kB' 'Shmem: 13631100 kB' 'KReclaimable: 231228 kB' 'Slab: 523264 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 292036 kB' 'KernelStack: 10128 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15234388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190532 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
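The hugepages.sh@96 test above is a transparent-hugepage guard: 'always [madvise] never' is the literal content of /sys/kernel/mm/transparent_hugepage/enabled (brackets mark the active mode), and the pattern *\[never\]* only matches when THP is fully disabled. Since it is not disabled here, AnonHugePages has to be counted. A minimal sketch of that guard, assuming the sysfs file exists and reusing the get_meminfo sketch above:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # 'always [madvise] never' on this box
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # THP may back anonymous huge pages
    else
        anon=0                              # THP off: nothing to account for
    fi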
00:05:11.025 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 [xtrace elided: the field scan compares every key from MemTotal through HardwareCorrupted against AnonHugePages and takes the 'continue' branch on each]
00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
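With anon settled at 0, the same helper is reused for the two remaining counters; in command-substitution form the three probes presumably read as follows (a reconstruction of the hugepages.sh call sites, not the verbatim source):

    anon=$(get_meminfo AnonHugePages)       # 0, as just traced
    surp=$(get_meminfo HugePages_Surp)      # traced next; 0 per the snapshot
    resv=$(get_meminfo HugePages_Rsvd)      # traced after that; also 0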
00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29263132 kB' 'MemAvailable: 34024364 kB' 'Buffers: 2716 kB' 'Cached: 18749032 kB' 'SwapCached: 0 kB' 'Active: 14833392 kB' 'Inactive: 4484684 kB' 'Active(anon): 14197432 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569456 kB' 'Mapped: 241428 kB' 'Shmem: 13631104 kB' 'KReclaimable: 231228 kB' 'Slab: 523264 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 292036 kB' 'KernelStack: 10128 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15234408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190500 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB' 00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.026 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.027 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.027 00:41:57 
00:05:11.027-00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # key scan: PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd all fail [[ $var == HugePages_Surp ]] and continue
00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
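What produces these long runs is the get_meminfo helper in setup/common.sh (the @31-@33 trace lines): it scans a meminfo snapshot one 'Key: value' pair at a time under bash xtrace, so every non-matching key leaves a [[ ... ]] / continue pair in the log. A minimal sketch of that scan pattern, assuming plain /proc/meminfo input (get_meminfo_value is an illustrative name, not the exact SPDK helper):

#!/usr/bin/env bash
# Sketch of the key scan seen at setup/common.sh@31-33: split each
# "Key: value [kB]" line on ':' and spaces, skip until the requested
# key matches, then print its value.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # each skipped key is one "[[ Key == ... ]] / continue" pair
        # in the xtrace output above
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_value HugePages_Surp   # prints 0 on this box

Under set -x every iteration of that loop is traced, which is why a single lookup spans dozens of log lines.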
00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:11.028 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29263132 kB' 'MemAvailable: 34024364 kB' 'Buffers: 2716 kB' 'Cached: 18749044 kB' 'SwapCached: 0 kB' 'Active: 14832940 kB' 'Inactive: 4484684 kB' 'Active(anon): 14196980 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569016 kB' 'Mapped: 241416 kB' 'Shmem: 13631116 kB' 'KReclaimable: 231228 kB' 'Slab: 523264 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 292036 kB' 'KernelStack: 10112 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15234428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190484 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
00:05:11.028-00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # key scan: MemTotal through HugePages_Free all fail [[ $var == HugePages_Rsvd ]] and continue
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:11.030 nr_hugepages=1024
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:11.030 resv_hugepages=0
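The @22-@29 trace lines above also show how one parser serves both system-wide and per-node queries: with no node argument, the test for /sys/devices/system/node/node/meminfo fails and /proc/meminfo is read; with a node number, the per-node file is chosen and its 'Node N ' line prefix is stripped so the same 'Key: value' scan applies. A sketch of that selection logic, with read_node_meminfo as an illustrative name:

#!/usr/bin/env bash
# Sketch of the source selection at setup/common.sh@22-29.
shopt -s extglob   # needed for the +([0-9]) pattern below

read_node_meminfo() {
    local node=$1 mem_f=/proc/meminfo mem
    # with node unset this path is .../node/node/meminfo and never exists,
    # so the system-wide file is kept
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # per-node lines read "Node 0 MemTotal: ... kB"; drop the prefix
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}

read_node_meminfo      # system-wide snapshot
read_node_meminfo 0    # node-0 snapshot, where the node exists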
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:11.030 surplus_hugepages=0
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:11.030 anon_hugepages=0
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.030 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:11.031 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:11.031 00:41:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29263408 kB' 'MemAvailable: 34024640 kB' 'Buffers: 2716 kB' 'Cached: 18749072 kB' 'SwapCached: 0 kB' 'Active: 14832888 kB' 'Inactive: 4484684 kB' 'Active(anon): 14196928 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568904 kB' 'Mapped: 241340 kB' 'Shmem: 13631144 kB' 'KReclaimable: 231228 kB' 'Slab: 523248 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 292020 kB' 'KernelStack: 10144 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15234452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190484 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
00:05:11.031-00:05:11.032 00:41:57-00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # key scan: MemTotal through Unaccepted all fail [[ $var == HugePages_Total ]] and continue
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
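The checks at hugepages.sh@107-@110 assert the accounting identity those lookups feed: the kernel-reported HugePages_Total must equal the requested page count plus surplus and reserved pages, which holds here as 1024 == 1024 + 0 + 0. A standalone sketch of the same check, substituting awk for the script's get_meminfo calls (verify_hugepages is an illustrative name):

#!/usr/bin/env bash
# Sketch of the consistency check around setup/hugepages.sh@99-110.
verify_hugepages() {
    local requested=$1 total surp resv
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    # same identity the script asserts: Total == requested + surplus + reserved
    (( total == requested + surp + resv )) || {
        echo "hugepage accounting mismatch: $total != $requested + $surp + $resv" >&2
        return 1
    }
    echo "OK: $total hugepages accounted for"
}

verify_hugepages 1024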
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:11.032 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 21245752 kB' 'MemUsed: 11588940 kB' 'SwapCached: 0 kB' 'Active: 8481732 kB' 'Inactive: 1165528 kB' 'Active(anon): 8085296 kB' 'Inactive(anon): 0 kB' 'Active(file): 396436 kB' 'Inactive(file): 1165528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9253912 kB' 'Mapped: 159444 kB' 'AnonPages: 396420 kB' 'Shmem: 7691948 kB' 'KernelStack: 6200 kB' 'PageTables: 5236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134340 kB' 'Slab: 283812 kB' 'SReclaimable: 134340 kB' 'SUnreclaim: 149472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
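get_nodes (hugepages.sh@27-@33) enumerates the NUMA node directories and records a per-node page count; the trace shows 512 pages on each of no_nodes=2 nodes, matching the node-0 snapshot's 'HugePages_Total: 512'. A sketch of that enumeration; pulling the count from each node's hugepages-2048kB/nr_hugepages file is an assumption about where the 512 comes from, made for illustration:

#!/usr/bin/env bash
# Sketch of the per-node enumeration at setup/hugepages.sh@27-33.
shopt -s extglob nullglob

nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} strips everything up to the last "node", leaving "0", "1", ...
    # reading nr_hugepages here is an assumed source for the 512 seen in the log
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }

for n in "${!nodes_sys[@]}"; do
    printf 'node%s: %s hugepages\n' "$n" "${nodes_sys[$n]}"
done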
00:05:11.033 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # key scan of node0 meminfo: MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable all fail [[ $var == HugePages_Surp ]] and continue
00:05:11.033 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.033 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.033 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.033 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.033 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.033 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.033 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.033 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.033 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.033 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.033 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.033 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.033 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.033 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456500 kB' 'MemFree: 8017912 kB' 'MemUsed: 11438588 kB' 'SwapCached: 0 kB' 'Active: 6351208 kB' 'Inactive: 3319156 kB' 'Active(anon): 6111684 kB' 'Inactive(anon): 0 kB' 'Active(file): 239524 kB' 'Inactive(file): 3319156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9497920 kB' 'Mapped: 81896 kB' 'AnonPages: 172488 kB' 'Shmem: 5939240 kB' 'KernelStack: 3944 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96888 kB' 'Slab: 239436 kB' 'SReclaimable: 96888 kB' 'SUnreclaim: 142548 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
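What the repeated IFS/read/compare/continue cycle above (and below) is executing: setup/common.sh's get_meminfo scans one meminfo line per iteration and prints the value of the requested field, falling back from /proc/meminfo to the per-node sysfs file when a node number is given, exactly as common.sh@17-29 show. A minimal self-contained sketch of that loop, reconstructed from this trace (the real script's exact wording and loop form may differ):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) strip pattern below

    # get_meminfo <field> [node] -- print the field's value from /proc/meminfo,
    # or from /sys/devices/system/node/node<N>/meminfo when a node is given.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem line
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated "continue" entries in the trace
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp 1   # prints 0 on the box traced here

Re-reading the file on every call keeps the helper stateless, which is why the trace shows a full mapfile and field scan for each lookup.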
00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.034 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle repeats for every remaining node1 meminfo field, MemFree through HugePages_Free ...]
00:05:11.035 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.035 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.035 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:11.035 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:11.035 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:11.035 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:11.035 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:11.035 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:11.035 node0=512 expecting 512
00:05:11.035 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:11.035 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:11.035 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:11.035 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:05:11.035 node1=512 expecting 512
00:05:11.035 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:11.035
00:05:11.035 real 0m1.228s
00:05:11.035 user 0m0.599s
00:05:11.035 sys 0m0.659s
00:05:11.035 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:11.035 00:41:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:11.035 ************************************
00:05:11.035 END TEST per_node_1G_alloc
00:05:11.035 ************************************
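The sorted_t/sorted_s assignments in the trace above are a compact evenness check: each node's page count is used as an array index, so equal counts collapse into a single slot, and the final [[ 512 == 512 ]] compares the one surviving index against the expected per-node count. A small standalone illustration of the trick (array values hypothetical, matching this run):

    #!/usr/bin/env bash
    # Sketch of the even-allocation check traced at hugepages.sh@126-130.
    nodes_test=([0]=512 [1]=512)   # per-node hugepage counts observed in this run
    sorted_t=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1          # value-as-index: duplicate counts collapse
        echo "node$node=${nodes_test[node]} expecting 512"
    done
    # One distinct count remains, and it is the expected 512.
    [[ ${!sorted_t[*]} == 512 ]] && echo "hugepages spread evenly"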
00:05:11.296 00:41:58 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:11.296 00:41:58 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:11.296 00:41:58 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:11.296 00:41:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:11.296 ************************************
00:05:11.296 START TEST even_2G_alloc
00:05:11.296 ************************************
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:11.296 00:41:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:12.241 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:05:12.241 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:12.241 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:05:12.241 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:05:12.241 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:05:12.241 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:05:12.241 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:05:12.241 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:05:12.241 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:05:12.241 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:05:12.241 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:05:12.241 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:05:12.241 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:05:12.241 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:05:12.241 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:05:12.241 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:05:12.241 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
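The get_test_nr_hugepages_per_node trace above (hugepages.sh@62-84) divides the 1024 requested 2 MB pages evenly across the two NUMA nodes, filling the highest node index first. A hypothetical standalone reduction for the values traced in this run (the real script's decrement bookkeeping, visible as the ": 512" / ": 1" no-op entries, differs in detail):

    #!/usr/bin/env bash
    _nr_hugepages=1024
    _no_nodes=2
    nodes_test=()
    per_node=$((_nr_hugepages / _no_nodes))
    while ((_no_nodes > 0)); do
        nodes_test[--_no_nodes]=$per_node   # fills index 1, then index 0
    done
    echo "${nodes_test[@]}"                 # -> 512 512

With HUGE_EVEN_ALLOC=yes and NRHUGE=1024 exported, scripts/setup.sh then reserves those counts per node before the verification pass that follows.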
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29297756 kB' 'MemAvailable: 34058988 kB' 'Buffers: 2716 kB' 'Cached: 18749160 kB' 'SwapCached: 0 kB' 'Active: 14832980 kB' 'Inactive: 4484684 kB' 'Active(anon): 14197020 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569088 kB' 'Mapped: 241416 kB' 'Shmem: 13631232 kB' 'KReclaimable: 231228 kB' 'Slab: 523156 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291928 kB' 'KernelStack: 10208 kB' 'PageTables: 9280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15234356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190532 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:12.241 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle repeats for every field from MemFree through HardwareCorrupted ...]
00:05:12.242 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:12.242 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:12.242 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:12.242 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
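What verify_nr_hugepages just established: transparent hugepages are not disabled on this box (the 'always [madvise] never' string does not select [never]), so it samples AnonHugePages, which is 0 kB here, before reading the surplus count that the next trace lines fetch. In sketch form, reusing the get_meminfo sketch above (the paths are the standard procfs/sysfs locations; the surrounding variable handling in hugepages.sh may differ):

    anon=0
    # AnonHugePages is only meaningful when THP is not disabled outright.
    if [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 on this box
    fi
    surp=$(get_meminfo HugePages_Surp)      # the system-wide lookup traced next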
00:05:12.242 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:12.242 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:12.242 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:12.242 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:12.242 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:12.242 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.242 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.242 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.242 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.242 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.242 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:12.242 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29297756 kB' 'MemAvailable: 34058988 kB' 'Buffers: 2716 kB' 'Cached: 18749164 kB' 'SwapCached: 0 kB' 'Active: 14833984 kB' 'Inactive: 4484684 kB' 'Active(anon): 14198024 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570124 kB' 'Mapped: 241492 kB' 'Shmem: 13631236 kB' 'KReclaimable: 231228 kB' 'Slab: 523156 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291928 kB' 'KernelStack: 10208 kB' 'PageTables: 9268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15246800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190484 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle repeats field by field; the section is truncated here mid-scan ...]
-r var val _ 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 00:41:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.243 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 
00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 
00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29298296 kB' 'MemAvailable: 34059528 kB' 'Buffers: 2716 kB' 'Cached: 18749172 kB' 'SwapCached: 0 kB' 'Active: 14834004 kB' 'Inactive: 4484684 kB' 'Active(anon): 14198044 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569584 kB' 'Mapped: 241420 kB' 'Shmem: 13631244 kB' 'KReclaimable: 231228 kB' 'Slab: 523156 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291928 kB' 'KernelStack: 10144 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15234156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190436 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB' 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.244 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.245 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:12.246 nr_hugepages=1024 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:12.246 resv_hugepages=0 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:12.246 surplus_hugepages=0 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:12.246 anon_hugepages=0 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.246 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.246 
00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29299580 kB' 'MemAvailable: 34060812 kB' 'Buffers: 2716 kB' 'Cached: 18749208 kB' 'SwapCached: 0 kB' 'Active: 14832660 kB' 'Inactive: 4484684 kB' 'Active(anon): 14196700 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568628 kB' 'Mapped: 241360 kB' 'Shmem: 13631280 kB' 'KReclaimable: 231228 kB' 'Slab: 523156 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291928 kB' 'KernelStack: 10080 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15234552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190436 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB' 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 00:41:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.247 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.247 00:41:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [condensed: IFS=': ' read -r var val _ walks the remaining /proc/meminfo fields (SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted); none matches HugePages_Total, so every iteration takes the @32 continue]
00:05:12.248 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:12.248 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:12.248 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
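The lookup above is the whole of get_meminfo's strategy: emit the file as 'Field: value' lines and read them back with IFS=': ' until the requested field turns up. A minimal standalone sketch of that pattern in bash (the function name and the streaming redirect are illustrative; setup/common.sh iterates a mapfile'd array instead):

    get_field() {
        # Look up one field in a meminfo-style file (default /proc/meminfo).
        local get=$1 mem_f=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            # First token is the field name, second its value.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }
    get_field HugePages_Total   # prints 1024 on this host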
00:05:12.248 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:12.248 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:12.248 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:12.248 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29-30 -- # [condensed: for node in /sys/devices/system/node/node+([0-9]) sets nodes_sys[0]=512 and nodes_sys[1]=512]
00:05:12.248 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:12.248 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:12.248 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:12.248 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:12.248 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:12.248 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17-24 -- # [condensed: get=HugePages_Surp, node=0, mem_f switches to /sys/devices/system/node/node0/meminfo]
00:05:12.248 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.248 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.248 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 21249820 kB' 'MemUsed: 11584872 kB' 'SwapCached: 0 kB' 'Active: 8481376 kB' 'Inactive: 1165528 kB' 'Active(anon): 8084940 kB' 'Inactive(anon): 0 kB' 'Active(file): 396436 kB' 'Inactive(file): 1165528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9253940 kB' 'Mapped: 159456 kB' 'AnonPages: 396160 kB' 'Shmem: 7691976 kB' 'KernelStack: 6200 kB' 'PageTables: 5188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134340 kB' 'Slab: 283760 kB' 'SReclaimable: 134340 kB' 'SUnreclaim: 149420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:12.249 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [condensed: node0 fields MemTotal through Unaccepted, then HugePages_Total and HugePages_Free, each fail the HugePages_Surp match]
00:05:12.511 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.511 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:12.511 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:12.511 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
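The per-node lookup above differs from the global one only in its input: every line of /sys/devices/system/node/node0/meminfo carries a 'Node <id> ' prefix, which the @29 extglob expansion strips before the same field scan runs. A standalone sketch of that read, assuming node 0 exists:

    shopt -s extglob                                  # needed for the +([0-9]) pattern
    node=0
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")                  # drop the "Node 0 " prefix from every line
    printf '%s\n' "${mem[@]}" | while IFS=': ' read -r var val _; do
        [[ $var == HugePages_Surp ]] && echo "node${node} HugePages_Surp=$val"
    done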
00:05:12.511 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:12.511 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:12.511 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:12.511 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17-24 -- # [condensed: get=HugePages_Surp, node=1, mem_f=/sys/devices/system/node/node1/meminfo]
00:05:12.511 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.511 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.511 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456500 kB' 'MemFree: 8049760 kB' 'MemUsed: 11406740 kB' 'SwapCached: 0 kB' 'Active: 6351392 kB' 'Inactive: 3319156 kB' 'Active(anon): 6111868 kB' 'Inactive(anon): 0 kB' 'Active(file): 239524 kB' 'Inactive(file): 3319156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9498024 kB' 'Mapped: 81904 kB' 'AnonPages: 172576 kB' 'Shmem: 5939344 kB' 'KernelStack: 3928 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96888 kB' 'Slab: 239396 kB' 'SReclaimable: 96888 kB' 'SUnreclaim: 142508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:12.512 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [condensed: node1 fields MemTotal through Unaccepted, then HugePages_Total and HugePages_Free, each fail the HugePages_Surp match]
00:05:12.513 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.513 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:12.513 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:12.513 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:12.513 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:12.513 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:12.513 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:12.513 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:12.513 node0=512 expecting 512
00:05:12.513 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:12.513 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:12.513 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:12.513 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:05:12.513 node1=512 expecting 512
00:05:12.513 00:41:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:12.513
00:05:12.513 real	0m1.214s
00:05:12.513 user	0m0.600s
00:05:12.513 sys	0m0.647s
00:05:12.513 00:41:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:12.513 00:41:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:12.513 ************************************
00:05:12.513 END TEST even_2G_alloc
00:05:12.513 ************************************
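The 'nodeN=512 expecting 512' lines above are the pass criterion: each NUMA node holds its even half of the 1024 pages. The same check could be made directly against the kernel's per-node hstate counters; a sketch under the assumption of 2048 kB pages and this run's expected count:

    expected=512
    for n in /sys/devices/system/node/node[0-9]*; do
        # Per-node huge-page counter exported by the kernel for each page size.
        got=$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")
        printf '%s=%s expecting %s\n' "${n##*/}" "$got" "$expected"
    done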
00:05:12.513 00:41:59 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:12.513 00:41:59 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:12.513 00:41:59 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:12.513 00:41:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:12.513 ************************************
00:05:12.513 START TEST odd_alloc
00:05:12.513 ************************************
00:05:12.513 00:41:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:05:12.513 00:41:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:12.513 00:41:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:05:12.513 00:41:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:12.513 00:41:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:12.513 00:41:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:12.513 00:41:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:12.513 00:41:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62-84 -- # [condensed: no user_nodes, _nr_hugepages=1025, _no_nodes=2; the odd total splits as nodes_test[1]=512 and nodes_test[0]=513]
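1025 pages cannot divide evenly across two nodes, hence the 513/512 split recorded above. One way such a split can be computed (an illustrative sketch, not the hugepages.sh implementation):

    nr=1025 nodes=2
    declare -a per_node
    for ((i = 0; i < nodes; i++)); do
        # Integer share per node; the remainder goes to the first node(s).
        per_node[i]=$((nr / nodes + (i < nr % nodes ? 1 : 0)))
    done
    echo "${per_node[@]}"   # 513 512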
00:05:12.513 00:41:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:12.513 00:41:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:12.513 00:41:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:05:12.513 00:41:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:12.513 00:41:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:13.457 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:05:13.457 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:13.457 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:05:13.457 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:05:13.457 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:05:13.457 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:05:13.457 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:05:13.457 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:05:13.457 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:05:13.457 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:05:13.457 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:05:13.457 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:05:13.457 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:05:13.457 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:05:13.457 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:05:13.457 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:05:13.457 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:05:13.457 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:13.457 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89-94 -- # [condensed: local node sorted_t sorted_s surp resv anon]
00:05:13.457 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:13.457 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:13.457 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-25 -- # [condensed: get=AnonHugePages, no node argument, mem_f stays /proc/meminfo]
00:05:13.457 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.457 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.457 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29272908 kB' 'MemAvailable: 34034140 kB' 'Buffers: 2716 kB' 'Cached: 18749296 kB' 'SwapCached: 0 kB' 'Active: 14841320 kB' 'Inactive: 4484684 kB' 'Active(anon): 14205360 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577208 kB' 'Mapped: 242412 kB' 'Shmem: 13631368 kB' 'KReclaimable: 231228 kB' 'Slab: 523024 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291796 kB' 'KernelStack: 10320 kB' 'PageTables: 9708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484600 kB' 'Committed_AS: 15245256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190616 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
00:05:13.458 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [condensed: fields MemTotal through Committed_AS tested against AnonHugePages so far; none match, the scan continues]
00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
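For orientation, the xtrace above is setup/common.sh's get_meminfo scanning /proc/meminfo one "Key: value" pair at a time: each line is split with IFS=': ', non-matching keys are skipped via continue, and the value is echoed once the requested key (here AnonHugePages) is found. A minimal standalone sketch of that loop, reconstructed from the trace (simplified; the real helper may differ in detail, and the node-argument calling convention is assumed):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used below

    get_meminfo() {
        local get=$1 node=${2:-}   # key to look up; optional NUMA node id (assumed)
        local var val _ mem_f mem line
        mem_f=/proc/meminfo
        # When a node id is given and a per-node meminfo exists, read that instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue   # the skip traced dozens of times above
            echo "$val"                        # value only, e.g. 0 for AnonHugePages
            return 0
        done
        return 1
    }

    get_meminfo AnonHugePages   # -> 0, captured as anon=0 at setup/hugepages.sh@97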
00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29286464 kB' 'MemAvailable: 34047696 kB' 'Buffers: 2716 kB' 'Cached: 18749300 kB' 'SwapCached: 0 kB' 'Active: 14841144 kB' 'Inactive: 4484684 kB' 'Active(anon): 14205184 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577052 kB' 'Mapped: 242360 kB' 'Shmem: 13631372 kB' 'KReclaimable: 231228 kB' 'Slab: 523000 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291772 kB' 'KernelStack: 10288 kB' 'PageTables: 9488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484600 kB' 'Committed_AS: 15245272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190584 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:13.459 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... per-key scan elided: MemFree through HugePages_Rsvd each tested against HugePages_Surp and skipped via continue ...]
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
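A note on the node= lines in the setup just traced: with no node id, the probe at setup/common.sh@23 tests the literal path /sys/devices/system/node/node/meminfo (the variable is empty), which cannot exist, so mem_f falls back to /proc/meminfo and the query is system-wide. A node-scoped reading would go through the per-node sysfs file, whose "Node N " line prefix is exactly what common.sh@29 strips. Hypothetical usage, assuming the sketch above and its node argument:

    get_meminfo HugePages_Surp     # this log: empty node, reads /proc/meminfo
    get_meminfo HugePages_Surp 0   # would read /sys/devices/system/node/node0/meminfo
    # Per-node lines look like "Node 0 HugePages_Surp: 0", hence the strip:
    #   mem=("${mem[@]#Node +([0-9]) }")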
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29285472 kB' 'MemAvailable: 34046704 kB' 'Buffers: 2716 kB' 'Cached: 18749316 kB' 'SwapCached: 0 kB' 'Active: 14830100 kB' 'Inactive: 4484684 kB' 'Active(anon): 14194140 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566028 kB' 'Mapped: 240424 kB' 'Shmem: 13631388 kB' 'KReclaimable: 231228 kB' 'Slab: 522996 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291768 kB' 'KernelStack: 10176 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484600 kB' 'Committed_AS: 15215400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190516 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:13.461 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... per-key scan elided: MemFree through HugePages_Free each tested against HugePages_Rsvd and skipped via continue ...]
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:13.463 nr_hugepages=1025
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:13.463 resv_hugepages=0
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:13.463 surplus_hugepages=0
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:13.463 anon_hugepages=0
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
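Before the HugePages_Total read that follows, it is worth spelling out what the hugepages.sh@107/@109 checks above assert, using the values the three get_meminfo calls returned (a reader's sketch, not harness code): the odd_alloc test requested a deliberately odd page count, 1025, and expects the whole pool to be plainly visible, with no surplus, reserved, or THP-backed pages muddying the accounting. The same counters are also available directly from standard /proc and sysfs interfaces, shown for reference:

    anon=0              # AnonHugePages: transparent hugepages are not interfering
    surp=0              # HugePages_Surp: nothing allocated over the configured pool
    resv=0              # HugePages_Rsvd: nothing reserved but not yet faulted in
    nr_hugepages=1025   # the deliberately odd count requested by the test

    (( 1025 == nr_hugepages + surp + resv ))   # @107: pool fully accounted for
    (( 1025 == nr_hugepages ))                 # @109: odd request honored exactly
    # Cross-check from the snapshots: Hugetlb 2099200 kB == 1025 * 2048 kB.

    # The same counters, without the per-key scan (standard kernel interfaces):
    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|AnonHugePages):' /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages        # 1025
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/surplus_hugepages   # 0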
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:13.463 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29281180 kB' 'MemAvailable: 34042412 kB' 'Buffers: 2716 kB' 'Cached: 18749336 kB' 'SwapCached: 0 kB' 'Active: 14833156 kB' 'Inactive: 4484684 kB' 'Active(anon): 14197196 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568992 kB' 'Mapped: 240812 kB' 'Shmem: 13631408 kB' 'KReclaimable: 231228 kB' 'Slab: 522988 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291760 kB' 'KernelStack: 10128 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484600 kB' 'Committed_AS: 15218972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190484 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
[xtrace elided: the same setup/common.sh@31-@32 scan repeats "IFS=': '; read -r var val _; continue" for every key of the snapshot above, from MemTotal through Unaccepted, each failing the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match]
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:13.464 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:13.465 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.465 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:13.465 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:13.465 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.465 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.465 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.465 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:13.465 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 21227516 kB' 'MemUsed: 11607176 kB' 'SwapCached: 0 kB' 'Active: 8485016 kB' 'Inactive: 1165528 kB' 'Active(anon): 8088580 kB' 'Inactive(anon): 0 kB' 'Active(file): 396436 kB' 'Inactive(file): 1165528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9253948 kB' 'Mapped: 159904 kB' 'AnonPages: 400232 kB' 'Shmem: 7691984 kB' 'KernelStack: 6248 kB' 'PageTables: 5280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134340 kB' 'Slab: 283692 kB' 'SReclaimable: 134340 kB' 'SUnreclaim: 149352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
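Here get_meminfo ran again with node=0: the @23/@24 lines switch the source file to /sys/devices/system/node/node0/meminfo, and the @29 expansion strips the "Node 0 " prefix that per-node meminfo files put on every line, so the same key scan works unchanged. A standalone sketch of that source selection, assuming extglob for the +([0-9]) pattern (illustrative, not the SPDK helper):

    #!/usr/bin/env bash
    shopt -s extglob   # enables the +([0-9]) pattern used below

    # read_meminfo [NODE] -- print normalized "Key: value" lines for the whole
    # system or, when NODE is given and present, for that NUMA node only.
    read_meminfo() {
        local node=$1 mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; drop it so the
        # same "Key: value" parser handles both sources.
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }

    read_meminfo 0 | grep -E 'HugePages_(Total|Surp)'

With node0's snapshot normalized this way, the scan that follows finds HugePages_Surp: 0 and HugePages_Total: 512 for the first node.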
[xtrace elided: setup/common.sh@31-@32 repeat "IFS=': '; read -r var val _; continue" for every node0 meminfo key from MemTotal through HugePages_Free, each failing the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match]
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:13.466 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456500 kB' 'MemFree: 8048376 kB' 'MemUsed: 11408124 kB' 'SwapCached: 0 kB' 'Active: 6349596 kB' 'Inactive: 3319156 kB' 'Active(anon): 6110072 kB' 'Inactive(anon): 0 kB' 'Active(file): 239524 kB' 'Inactive(file): 3319156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9498108 kB' 'Mapped: 80908 kB' 'AnonPages: 170720 kB' 'Shmem: 5939428 kB' 'KernelStack: 3880 kB' 'PageTables: 3532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96888 kB' 'Slab: 239296 kB' 'SReclaimable: 96888 kB' 'SUnreclaim: 142408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace elided: the same per-key scan repeats over the node1 snapshot above until HugePages_Surp is matched]
00:05:13.467 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:13.467 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:13.467 00:42:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:13.467 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:13.727 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
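The @126-@128 loop that follows feeds each observed and each expected per-node count into sorted_t and sorted_s. Using the count itself as the index of a plain indexed array means ${!sorted_t[*]} expands the distinct counts in ascending order, so the final check can compare the literal strings "512 513" on both sides no matter which node received the extra page. A self-contained sketch of that idiom with this run's values; the array roles are an assumption mirroring the log's nodes_test/nodes_sys, not the script verbatim:

    #!/usr/bin/env bash
    # Order-insensitive comparison of allocated vs. planned hugepages per node.
    actual=([0]=512 [1]=513)     # counts read back from node0/node1 meminfo above
    expected=([0]=513 [1]=512)   # counts the odd_alloc test planned per node

    declare -a sorted_t sorted_s   # indexed arrays: numeric keys expand ascending
    for node in "${!actual[@]}"; do
        sorted_t[actual[node]]=1     # the count becomes the key
        sorted_s[expected[node]]=1
        echo "node$node=${actual[node]} expecting ${expected[node]}"
    done

    # Both expansions yield "512 513" here, so the distribution passes.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'odd allocation OK'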
00:05:13.727 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:13.727 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:13.727 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
node0=512 expecting 513
00:05:13.727 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:13.727 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:13.727 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:13.727 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
node1=513 expecting 512
00:05:13.727 00:42:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:05:13.727
00:05:13.727 real 0m1.122s
00:05:13.727 user 0m0.558s
00:05:13.727 sys 0m0.593s
00:05:13.727 00:42:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:13.727 00:42:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:13.727 ************************************
00:05:13.727 END TEST odd_alloc
00:05:13.727 ************************************
00:05:13.727 00:42:00 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:13.727 00:42:00 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:13.727 00:42:00 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:13.727 00:42:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:13.727 ************************************
00:05:13.727 START TEST custom_alloc
00:05:13.727 ************************************
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
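At this point custom_alloc has turned two sizes into page counts: 1048576 kB at the 2048 kB Hugepagesize reported earlier gives the 512 pages planned for node 0, and 2097152 kB gives the 1024 pages for node 1. The @181-@187 lines that follow serialize that plan into the HUGENODE string handed to scripts/setup.sh. A sketch of the arithmetic and the comma join, assuming the script's "local IFS=," is what produces the joined form (names mirror the log; simplified, not the actual setup/hugepages.sh):

    #!/usr/bin/env bash
    # Sketch: kB sizes -> per-node page counts -> HUGENODE=... spec string.
    default_hugepages=2048                            # kB, Hugepagesize from /proc/meminfo

    nodes_hp[0]=$(( 1048576 / default_hugepages ))    # 512 pages planned for node 0
    nodes_hp[1]=$(( 2097152 / default_hugepages ))    # 1024 pages planned for node 1

    HUGENODE=()
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    done

    # With IFS set to a comma, "${HUGENODE[*]}" joins the entries.
    spec=$(IFS=,; echo "${HUGENODE[*]}")
    echo "$spec"   # nodes_hp[0]=512,nodes_hp[1]=1024

This is also why the verify step later starts from nr_hugepages=1536: 512 + 1024 pages across the two nodes.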
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:13.727 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.728 00:42:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:14.671 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:05:14.671 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:14.671 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:05:14.671 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:05:14.671 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:05:14.671 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:05:14.671 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:05:14.671 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:05:14.671 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:05:14.671 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:05:14.671 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:05:14.671 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:05:14.671 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:05:14.671 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:05:14.671 0000:80:04.2 (8086 3c22): 
Already using the vfio-pci driver 00:05:14.671 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:05:14.671 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 28192920 kB' 'MemAvailable: 32954152 kB' 'Buffers: 2716 kB' 'Cached: 18749420 kB' 'SwapCached: 0 kB' 'Active: 14834884 kB' 'Inactive: 4484684 kB' 'Active(anon): 14198924 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570720 kB' 'Mapped: 241308 kB' 'Shmem: 13631492 kB' 'KReclaimable: 231228 kB' 'Slab: 522896 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291668 kB' 'KernelStack: 10176 kB' 'PageTables: 9000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961336 kB' 'Committed_AS: 15220124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190436 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB' 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.671 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.672 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile 
-t mem 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 28194612 kB' 'MemAvailable: 32955844 kB' 'Buffers: 2716 kB' 'Cached: 18749424 kB' 'SwapCached: 0 kB' 'Active: 14835436 kB' 'Inactive: 4484684 kB' 'Active(anon): 14199476 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571704 kB' 'Mapped: 241308 kB' 'Shmem: 13631496 kB' 'KReclaimable: 231228 kB' 'Slab: 522888 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291660 kB' 'KernelStack: 10144 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961336 kB' 'Committed_AS: 15221472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190408 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.673 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:14.674 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 28194820 kB' 'MemAvailable: 32956052 kB' 'Buffers: 2716 kB' 'Cached: 18749428 kB' 'SwapCached: 0 kB' 'Active: 14829776 kB' 'Inactive: 4484684 kB' 'Active(anon): 14193816 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565588 kB' 'Mapped: 241096 kB' 'Shmem: 13631500 kB' 'KReclaimable: 231228 kB' 'Slab: 522840 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 
291612 kB' 'KernelStack: 10144 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961336 kB' 'Committed_AS: 15214276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190404 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB' 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.675 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.675 
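
[editor's note] The long scans above and below are bash xtrace of get_meminfo: it mapfiles /proc/meminfo (or a per-node meminfo file when a node argument is given), strips any "Node N " prefix, then reads "key: value" pairs with IFS=': ', hitting "continue" on every key until the requested one (AnonHugePages, then HugePages_Surp, now HugePages_Rsvd) matches and its value is echoed. A self-contained sketch of that pattern, reconstructed from the trace, so details may differ from setup/common.sh itself:

# Sketch: look up one key in /proc/meminfo the way the traced scans do.
get_meminfo() {
  local get=$1 node=${2:-} var val _ line
  local mem_f=/proc/meminfo mem
  # Per-node lookup when a node is given and the sysfs file exists;
  # with node unset the trace's "-e .../node/meminfo" test fails and
  # the global /proc/meminfo is used, as seen above.
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  shopt -s extglob
  mapfile -t mem <"$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each key with "Node N "
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<<"$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1
}
get_meminfo HugePages_Surp   # prints 0 for the dumps shown in this log
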
00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [condensed: get_meminfo key scan for HugePages_Rsvd; each remaining key (Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free) fails [[ key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and hits 'continue']
00:05:14.677 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
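The run of '[[ key == ... ]] / continue' entries condensed above is get_meminfo walking one meminfo line at a time under xtrace: each skipped key prints one test and one 'continue'. A minimal sketch of that loop, reconstructed from the setup/common.sh@31-33 commands in this trace (the real SPDK helper reads from a mapfile'd array and takes a node argument, so this is an approximation, not the verbatim source):

    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # every non-matching key is skipped; under 'set -x' each skip
            # emits one '[[ ... ]]' line and one 'continue' line
            [[ $var == "$get" ]] || continue
            echo "$val"   # IFS=': ' already splits off the trailing 'kB' unit
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value HugePages_Rsvd   # prints 0 on this machine, per the trace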
00:05:14.677 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:14.677 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:14.677 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:14.677 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:05:14.677 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:14.677 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:14.677 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:14.677 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:14.677 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:05:14.677 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:14.677 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-29 -- # [condensed: get=HugePages_Total, node unset, mem_f=/proc/meminfo, mapfile -t mem, 'Node N ' prefixes stripped]
00:05:14.677 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 28194820 kB' 'MemAvailable: 32956052 kB' 'Buffers: 2716 kB' 'Cached: 18749440 kB' 'SwapCached: 0 kB' 'Active: 14832196 kB' 'Inactive: 4484684 kB' 'Active(anon): 14196236 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568064 kB' 'Mapped: 241192 kB' 'Shmem: 13631512 kB' 'KReclaimable: 231228 kB' 'Slab: 522920 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291692 kB' 'KernelStack: 10128 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961336 kB' 'Committed_AS: 15218864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190420 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
00:05:14.677 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [condensed: key scan for HugePages_Total; every key from MemTotal through Unaccepted fails [[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and hits 'continue']
00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
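With nr_hugepages=1536 and surplus and reserved both 0, the checks at hugepages.sh@107-110 are plain arithmetic: the kernel-reported HugePages_Total must equal the requested page count plus surplus and reserved pages, and get_nodes has just recorded the custom split of 512 pages on node0 and 1024 on node1 (512 + 1024 = 1536). A hedged sketch of that accounting, with names following the trace and values taken from this run:

    # names follow the hugepages.sh trace; values are the ones observed here
    nr_hugepages=1536   # requested: 512 on node0 + 1024 on node1
    surp=0 resv=0       # surplus_hugepages / resv_hugepages read above

    hugepages_total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

    # the @107/@110 invariant: kernel total == requested + surplus + reserved
    (( hugepages_total == nr_hugepages + surp + resv )) && echo "accounting OK"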
node in "${!nodes_test[@]}" 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 21217348 kB' 'MemUsed: 11617344 kB' 'SwapCached: 0 kB' 'Active: 8480532 kB' 'Inactive: 1165528 kB' 'Active(anon): 8084096 kB' 'Inactive(anon): 0 kB' 'Active(file): 396436 kB' 'Inactive(file): 1165528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9254016 kB' 'Mapped: 160264 kB' 'AnonPages: 395152 kB' 'Shmem: 7692052 kB' 'KernelStack: 6200 kB' 'PageTables: 5028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134340 kB' 'Slab: 283660 kB' 'SReclaimable: 134340 kB' 'SUnreclaim: 149320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.679 00:42:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.679 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- 
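The per-node lookups repeat the same scan against /sys/devices/system/node/nodeN/meminfo, whose lines carry a 'Node N ' prefix that the helper strips with the extglob expansion mem=("${mem[@]#Node +([0-9]) }") before parsing. A minimal sketch of that node-aware lookup, reconstructed from the setup/common.sh@17-33 commands in this trace (an approximation of the SPDK helper, not the verbatim source):

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # with a node argument, read the per-node meminfo instead
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo

        local -a mem
        mapfile -t mem < "$mem_f"
        # per-node lines look like 'Node 0 HugePages_Surp: 0';
        # strip the 'Node N ' prefix so the parser sees plain keys
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

    get_meminfo HugePages_Surp 0   # prints 0 in this run, per the trace above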
00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-29 -- # [condensed: get=HugePages_Surp, node=1, mem_f=/sys/devices/system/node/node1/meminfo, mapfile -t mem, 'Node 1 ' prefixes stripped]
00:05:14.680 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456500 kB' 'MemFree: 6977220 kB' 'MemUsed: 12479280 kB' 'SwapCached: 0 kB' 'Active: 6354808 kB' 'Inactive: 3319156 kB' 'Active(anon): 6115284 kB' 'Inactive(anon): 0 kB' 'Active(file): 239524 kB' 'Inactive(file): 3319156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9498184 kB' 'Mapped: 80908 kB' 'AnonPages: 175912 kB' 'Shmem: 5939504 kB' 'KernelStack: 3912 kB' 'PageTables: 3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96888 kB' 'Slab: 239256 kB' 'SReclaimable: 96888 kB' 'SUnreclaim: 142368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:14.681 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [condensed: key scan for HugePages_Surp over the node1 dump; MemTotal through SecPageTables each hit 'continue', and the captured log ends mid-scan here]
-- # read -r var val _ 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.682 00:42:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:14.682 node0=512 
expecting 512 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:14.682 node1=1024 expecting 1024 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:14.682 00:05:14.682 real 0m1.130s 00:05:14.682 user 0m0.492s 00:05:14.682 sys 0m0.667s 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.682 00:42:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:14.682 ************************************ 00:05:14.682 END TEST custom_alloc 00:05:14.682 ************************************ 00:05:14.682 00:42:01 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:14.682 00:42:01 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.682 00:42:01 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.682 00:42:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:14.942 ************************************ 00:05:14.942 START TEST no_shrink_alloc 00:05:14.942 ************************************ 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:14.942 00:42:01 
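The per-node check that just passed reads each node's hugepage count back, echoes "nodeN=actual expecting expected", and finally matches the joined list ([[ 512,1024 == \5\1\2\,\1\0\2\4 ]]). A minimal stand-alone sketch of the same idea, reading the kernel's per-node sysfs counters directly; the expected values are copied from the log above, and the script itself is illustrative, not part of the suite:

    #!/usr/bin/env bash
    # Sketch: verify per-NUMA-node 2 MB hugepage counts against expectations.
    expected=(512 1024)   # node0, node1; values taken from the log above
    for node in "${!expected[@]}"; do
        sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
        actual=$(<"$sysfs")
        echo "node$node=$actual expecting ${expected[node]}"
        (( actual == expected[node] )) || exit 1
    done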
00:05:14.682 00:42:01 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:14.682 00:42:01 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:14.682 00:42:01 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:14.682 00:42:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:14.942 ************************************
00:05:14.942 START TEST no_shrink_alloc
00:05:14.942 ************************************
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:14.942 00:42:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:15.885 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:05:15.885 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:15.885 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:05:15.885 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:05:15.885 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:05:15.885 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:05:15.885 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:05:15.885 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:05:15.885 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:05:15.885 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:05:15.885 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:05:15.885 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:05:15.885 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:05:15.885 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:05:15.885 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:05:15.885 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:05:15.885 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:05:15.885 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:15.885 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:15.885 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:15.885 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:15.885 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:15.885 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:15.885 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:15.885 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:15.885 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
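Every get_meminfo call in this trace expands to the same pattern: pick /proc/meminfo (or a per-node meminfo file when a node argument is given), read it into an array, strip the "Node N " prefix, then scan key/value pairs until the requested key matches, which is exactly what produces the long runs of "continue" entries. A simplified reconstruction of that flow, inferred from the traced commands rather than copied from setup/common.sh:

    # Read one field from /proc/meminfo, or from a per-node meminfo file when a
    # node id is given (inferred reconstruction; setup/common.sh differs in detail).
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#"Node $node "}        # per-node files prefix every row with "Node N "
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue  # the long runs of "continue" above are this test
            echo "$val"
            return 0
        done <"$mem_f"
        return 1
    }

Against the snapshot printed below, get_meminfo MemTotal would print 52291192, and get_meminfo AnonHugePages prints 0.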
00:05:15.885 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:15.885 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:15.885 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:15.885 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:15.885 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.885 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:15.885 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:15.885 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.886 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.886 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:15.886 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:15.886 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29237784 kB' 'MemAvailable: 33999016 kB' 'Buffers: 2716 kB' 'Cached: 18749544 kB' 'SwapCached: 0 kB' 'Active: 14835756 kB' 'Inactive: 4484684 kB' 'Active(anon): 14199796 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571484 kB' 'Mapped: 241352 kB' 'Shmem: 13631616 kB' 'KReclaimable: 231228 kB' 'Slab: 522828 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291600 kB' 'KernelStack: 10128 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15220112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190424 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
[xtrace collapsed: the same key-by-key scan, now comparing every snapshot key from MemTotal through HardwareCorrupted against AnonHugePages and skipping it with "continue"]
00:05:15.887 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:15.887 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:15.887 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:15.887 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:15.887 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
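verify_nr_hugepages is collecting three counters here: AnonHugePages (just returned as anon=0), HugePages_Surp, and, further down, HugePages_Rsvd. One plausible way a verifier can combine them, using the get_meminfo sketch from above; the final comparison is an assumption for illustration, not lifted from hugepages.sh:

    anon=$(get_meminfo AnonHugePages)     # kB of THP-backed anonymous memory
    surp=$(get_meminfo HugePages_Surp)    # pages allocated beyond the persistent pool
    resv=$(get_meminfo HugePages_Rsvd)    # pages reserved by mappings, not yet faulted in
    total=$(get_meminfo HugePages_Total)
    # With anon=0 and surp=0, as in the snapshot above, the persistent pool alone
    # must cover the requested 1024 pages.
    (( total - surp == 1024 )) || echo "unexpected pool size: $total (surp=$surp resv=$resv)"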
00:05:15.887 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:15.887 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:15.887 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:15.887 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:15.887 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.887 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:15.887 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:15.887 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.887 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.887 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:15.887 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:15.887 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29237784 kB' 'MemAvailable: 33999016 kB' 'Buffers: 2716 kB' 'Cached: 18749548 kB' 'SwapCached: 0 kB' 'Active: 14835336 kB' 'Inactive: 4484684 kB' 'Active(anon): 14199376 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571028 kB' 'Mapped: 241276 kB' 'Shmem: 13631620 kB' 'KReclaimable: 231228 kB' 'Slab: 522824 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291596 kB' 'KernelStack: 10112 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15220128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190392 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
[xtrace collapsed: key-by-key scan of the snapshot, comparing every key from MemTotal through HugePages_Free against HugePages_Surp and skipping it with "continue"]
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29237784 kB' 'MemAvailable: 33999016 kB' 'Buffers: 2716 kB' 'Cached: 18749568 kB' 'SwapCached: 0 kB' 'Active: 14835364 kB' 'Inactive: 4484684 kB' 'Active(anon): 14199404 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571032 kB' 'Mapped: 241268 kB' 'Shmem: 13631640 kB' 'KReclaimable: 231228 kB' 'Slab: 522824 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291596 kB' 'KernelStack: 10112 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15220152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190376 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- #
[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.889 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.890 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.891 
00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
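The loop traced above is the script's get_meminfo helper scanning /proc/meminfo key by key until the requested field matches, then echoing its value. A minimal standalone sketch of the same pattern, reconstructed from the trace (the function name and exact structure here are illustrative, not the actual setup/common.sh source):

    #!/usr/bin/env bash
    # extglob must be enabled before the function body is parsed,
    # since "Node +([0-9]) " below is an extended glob.
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem

        # With a node id, per-node stats come from sysfs; with an empty
        # node the path below does not exist and /proc/meminfo is used,
        # exactly as in the trace.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node N " prefix; strip it so the keys
        # line up with the global /proc/meminfo format.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

Used as in the trace, surp=$(get_meminfo_sketch HugePages_Surp) yields 0 on this machine. The IFS of ':' plus space splits "HugePages_Surp: 0" into the key, the value, and a trailing unit field (empty or "kB") that is discarded into _.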
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:15.891 nr_hugepages=1024
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:15.891 resv_hugepages=0
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:15.891 surplus_hugepages=0
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:15.891 anon_hugepages=0
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.891 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29237784 kB' 'MemAvailable: 33999016 kB' 'Buffers: 2716 kB' 'Cached: 18749572 kB' 'SwapCached: 0 kB' 'Active: 14836048 kB' 'Inactive: 4484684 kB' 'Active(anon): 14200088 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571776 kB' 'Mapped: 241268 kB' 'Shmem: 13631644 kB' 'KReclaimable: 231228 kB' 'Slab: 522824 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291596 kB' 'KernelStack: 10160 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15222876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190376 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
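The hugepages.sh assertions above (@107 through @110) verify that the kernel's counters stayed self-consistent and that the requested pool was not shrunk. A hedged sketch of that accounting, reusing the get_meminfo_sketch helper defined earlier (the real checks live in setup/hugepages.sh; the mismatch messages below are illustrative, not SPDK output):

    nr_hugepages=1024
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in the trace above
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in the trace above
    total=$(get_meminfo_sketch HugePages_Total)  # 1024 in the trace above

    # Mirror of the @107/@110 checks: the global pool must account for
    # requested + surplus + reserved pages ...
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
    # ... and mirror of @109: the kernel honored the requested count.
    (( nr_hugepages == 1024 )) || echo 'kernel shrank the hugepage pool' >&2

Both arithmetic tests pass in this run (1024 == 1024 + 0 + 0), which is exactly what the no_shrink_alloc test is asserting: allocating did not silently shrink the pre-reserved pool.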
[... per-key scan: every /proc/meminfo field from MemTotal through Unaccepted is compared against HugePages_Total and skipped via continue ...]
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.893 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 20177312 kB' 'MemUsed: 12657380 kB' 'SwapCached: 0 kB' 'Active: 8487024 kB' 'Inactive: 1165528 kB' 'Active(anon): 8090588 kB' 'Inactive(anon): 0 kB' 'Active(file): 396436 kB' 'Inactive(file): 1165528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9254088 kB' 'Mapped: 160420 kB' 'AnonPages: 401652 kB' 'Shmem: 7692124 kB' 'KernelStack: 6232 kB' 'PageTables: 5220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134340 kB' 'Slab: 283612 kB' 'SReclaimable: 134340 kB' 'SUnreclaim: 149272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... per-key scan: each node0 meminfo field from MemTotal through HugePages_Total is compared against HugePages_Surp and skipped via continue ...]
00:05:15.895 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:15.895 00:42:02
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.895 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.895 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.895 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.895 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.895 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:15.895 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:15.895 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:15.895 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:15.895 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:15.895 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:15.895 node0=1024 expecting 1024 00:05:15.895 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:15.895 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:15.895 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:15.895 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:15.895 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.895 00:42:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:16.831 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:05:16.831 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:16.831 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:05:16.831 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:05:16.831 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:05:16.831 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:05:16.832 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:05:16.832 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:05:16.832 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:05:16.832 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:05:16.832 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:05:16.832 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:05:16.832 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:05:16.832 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:05:16.832 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:05:16.832 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:05:16.832 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:05:16.832 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:16.832 00:42:03 
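The hugepages.sh@117-@130 lines above are the whole per-node assertion: the surplus value returned by get_meminfo is folded into nodes_test, the observed totals are recorded in the sorted_t/sorted_s sets, and each node is string-compared against the expected count. A minimal, self-contained sketch of that bookkeeping in the same bash idiom, with the array and variable names taken from the xtrace (the real setup/hugepages.sh logic may differ in detail; the increment is 0 here only because HugePages_Surp was 0 in this run):

#!/usr/bin/env bash
# Hedged reconstruction of the traced per-node check, not the SPDK source.
expected=1024
declare -a nodes_test sorted_t

# Single-node box in this run; take the count straight from /proc/meminfo.
nodes_test[0]=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += 0 ))       # surplus adjustment; 0 in the traced run
    sorted_t[${nodes_test[node]}]=1   # collect distinct totals, as @127 does
    echo "node$node=${nodes_test[node]} expecting $expected"
    [[ ${nodes_test[node]} == "$expected" ]] || exit 1
done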
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
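The hugepages.sh@96 test above is the transparent-hugepage guard: its left-hand side is the expanded contents of /sys/kernel/mm/transparent_hugepage/enabled (here "always [madvise] never", i.e. madvise mode), and the pattern asks whether the literal "[never]" marker is absent. A standalone sketch of the same check:

#!/usr/bin/env bash
# The kernel brackets the active THP mode, e.g.: always [madvise] never
thp_f=/sys/kernel/mm/transparent_hugepage/enabled
if [[ -r $thp_f && $(<"$thp_f") != *"[never]"* ]]; then
    echo "THP is enabled, so AnonHugePages in /proc/meminfo may be nonzero"
fi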
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:16.832 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29227228 kB' 'MemAvailable: 33988460 kB' 'Buffers: 2716 kB' 'Cached: 18749652 kB' 'SwapCached: 0 kB' 'Active: 14836456 kB' 'Inactive: 4484684 kB' 'Active(anon): 14200496 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572056 kB' 'Mapped: 241272 kB' 'Shmem: 13631724 kB' 'KReclaimable: 231228 kB' 'Slab: 522892 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291664 kB' 'KernelStack: 10192 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15220216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190472 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
[xtrace condensed: setup/common.sh@31-@32 read every key from MemTotal through HardwareCorrupted and hit "continue" on each non-match]
00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
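Each of the traced lookups (get_meminfo AnonHugePages, HugePages_Surp, HugePages_Rsvd) is the same small parser: slurp the meminfo file, strip any per-node prefix, then split each line on IFS=': ' until the requested key appears. A runnable sketch assembled from the common.sh@17-@33 xtrace lines (the shape is inferred from the trace, not copied from SPDK's setup/common.sh):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {    # usage: get_meminfo <Key> [node]
    local get=$1 node=${2:-}
    local var val
    local mem_f mem line

    mem_f=/proc/meminfo
    # Per-node stats live in sysfs when a node is given (@23 in the trace).
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop any "Node N " prefix (@29)

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # the trace's @33 "echo 0"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Surp     # prints 0 in the traced run
get_meminfo HugePages_Total 0  # per-node variant, if node0 sysfs exists

Splitting on IFS=': ' is what delivers each key without its trailing colon, so the @32 comparison can match bare names like HugePages_Surp directly.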
# mem=("${mem[@]#Node +([0-9]) }") 00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29226976 kB' 'MemAvailable: 33988208 kB' 'Buffers: 2716 kB' 'Cached: 18749656 kB' 'SwapCached: 0 kB' 'Active: 14836508 kB' 'Inactive: 4484684 kB' 'Active(anon): 14200548 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572012 kB' 'Mapped: 241440 kB' 'Shmem: 13631728 kB' 'KReclaimable: 231228 kB' 'Slab: 522920 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291692 kB' 'KernelStack: 10096 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15220236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190440 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB' 00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.833 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.834 00:42:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.834 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.099 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.100 00:42:03 
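One detail worth noting in the repeated common.sh@23/@29 lines: when a NUMA node is given, the stats come from /sys/devices/system/node/nodeN/meminfo, whose rows carry a "Node <N> " prefix, and the extglob expansion strips it so the same loop parses both file formats. A tiny demonstration (sample lines are illustrative, not taken from this run):

#!/usr/bin/env bash
shopt -s extglob
# What /sys/devices/system/node/node0/meminfo lines look like (made-up values):
mem=('Node 0 MemTotal: 52291192 kB' 'Node 0 HugePages_Total: 1024')
# The @29 expansion: remove a leading "Node <digits> " from every element.
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
# -> MemTotal: 52291192 kB
# -> HugePages_Total: 1024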
00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:17.100 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29226724 kB' 'MemAvailable: 33987956 kB' 'Buffers: 2716 kB' 'Cached: 18749672 kB' 'SwapCached: 0 kB' 'Active: 14835632 kB' 'Inactive: 4484684 kB' 'Active(anon): 14199672 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571092 kB' 'Mapped: 241272 kB' 'Shmem: 13631744 kB' 'KReclaimable: 231228 kB' 'Slab: 522908 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291680 kB' 'KernelStack: 10112 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15220256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190424 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB'
[xtrace condensed: setup/common.sh@31-@32 scanning MemTotal, MemFree, and the following keys toward HugePages_Rsvd; the captured trace breaks off mid-scan]
-- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
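The long run of "[[ key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" records above and below is bash xtrace output from get_meminfo in setup/common.sh: the function snapshots /proc/meminfo (or a per-node meminfo file), then walks it one "key: value" pair at a time, skipping with continue until the requested key matches, and finally echoes the matched value. A minimal sketch of that lookup, simplified from what the trace shows (the printf/mapfile round-trip visible in the log is replaced by a plain read loop, so treat this as an illustration rather than the exact SPDK source):

    # get_meminfo KEY [NODE]: print the value recorded for KEY
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # with a node id, read the per-node counters from sysfs instead
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        # per-node files prefix every line with "Node N "; strip it first
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1  # key not present
    }

Under set -x every skipped key becomes one [[ ... ]] / continue pair, which is why a single get_meminfo HugePages_Rsvd call accounts for dozens of records here.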
00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.101 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.102 00:42:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:17.102 nr_hugepages=1024 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:17.102 resv_hugepages=0 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:17.102 surplus_hugepages=0 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:17.102 anon_hugepages=0 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291192 kB' 'MemFree: 29226724 kB' 'MemAvailable: 33987956 kB' 'Buffers: 2716 kB' 'Cached: 18749696 kB' 'SwapCached: 0 kB' 'Active: 14835636 kB' 'Inactive: 4484684 kB' 'Active(anon): 14199676 kB' 'Inactive(anon): 0 kB' 'Active(file): 635960 kB' 'Inactive(file): 4484684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571092 kB' 'Mapped: 241272 kB' 'Shmem: 13631768 kB' 'KReclaimable: 231228 kB' 'Slab: 522908 kB' 'SReclaimable: 231228 kB' 'SUnreclaim: 291680 kB' 'KernelStack: 10112 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485624 kB' 'Committed_AS: 15220280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190440 kB' 'VmallocChunk: 0 kB' 'Percpu: 28672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5861668 kB' 'DirectMap2M: 31809536 kB' 'DirectMap1G: 23068672 kB' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
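By this point both earlier scans have produced their answers: hugepages.sh@99 recorded surp=0 and @100 recorded resv=0, and the script echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0. The HugePages_Total scan now in progress feeds the consistency check at hugepages.sh@107/@110, which asserts that the kernel's reported total equals the requested page count plus surplus and reserved pages. A sketch of that bookkeeping with this run's values (it reuses the get_meminfo sketch above; an assumption for illustration, not the literal hugepages.sh code):

    nr_hugepages=1024                        # what the test asked for
    surp=$(get_meminfo HugePages_Surp)       # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run
    total=$(get_meminfo HugePages_Total)     # 1024 in this run
    (( total == nr_hugepages + surp + resv )) ||
        echo "unexpected hugepage count: $total" >&2

With Hugepagesize: 2048 kB this is consistent with the Hugetlb: 2097152 kB line in the meminfo dumps (1024 pages * 2048 kB).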
00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.102 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.103 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 20158328 kB' 'MemUsed: 12676364 kB' 'SwapCached: 0 kB' 'Active: 8486200 kB' 
'Inactive: 1165528 kB' 'Active(anon): 8089764 kB' 'Inactive(anon): 0 kB' 'Active(file): 396436 kB' 'Inactive(file): 1165528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9254152 kB' 'Mapped: 160364 kB' 'AnonPages: 400712 kB' 'Shmem: 7692188 kB' 'KernelStack: 6200 kB' 'PageTables: 5160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134340 kB' 'Slab: 283796 kB' 'SReclaimable: 134340 kB' 'SUnreclaim: 149456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.104 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 
00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.105 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:17.106 node0=1024 expecting 1024 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:17.106 00:05:17.106 real 0m2.227s 00:05:17.106 user 0m0.972s 00:05:17.106 sys 0m1.314s 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.106 00:42:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:17.106 ************************************ 00:05:17.106 END TEST no_shrink_alloc 00:05:17.106 ************************************ 00:05:17.106 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:17.106 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:17.106 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:17.106 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:17.106 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:17.106 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:17.106 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:17.106 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:17.106 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:17.106 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:17.106 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:17.106 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:17.106 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:17.106 00:42:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:17.106 00:05:17.106 real 0m9.543s 00:05:17.106 user 0m3.929s 00:05:17.106 sys 0m4.881s 00:05:17.106 00:42:04 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.106 00:42:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:17.106 ************************************ 00:05:17.106 END TEST hugepages 00:05:17.106 ************************************ 00:05:17.106 00:42:04 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:17.106 00:42:04 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.106 00:42:04 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.106 00:42:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:17.106 ************************************ 00:05:17.106 START TEST driver 00:05:17.106 ************************************ 00:05:17.106 00:42:04 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:17.106 * Looking for test storage... 
00:05:17.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:17.106 00:42:04 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:17.106 00:42:04 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.106 00:42:04 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:19.637 00:42:06 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:19.637 00:42:06 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.637 00:42:06 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.637 00:42:06 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:19.637 ************************************ 00:05:19.637 START TEST guess_driver 00:05:19.637 ************************************ 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 102 > 0 )) 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:19.637 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:19.637 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:19.637 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:19.637 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:19.637 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:19.637 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:19.637 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:19.637 00:42:06 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' Looking for driver=vfio-pci 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.637 00:42:06 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:20.572 [... identical @57 read -r _ _ _ _ marker setup_driver / @58 [[ -> == \-\> ]] / @61 [[ vfio-pci == vfio-pci ]] triples elided, one per device line of the setup.sh config output, all matching vfio-pci ...] 00:05:21.508 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.508 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.508 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.508 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:21.508 00:42:08 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:21.508 00:42:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:21.508 00:42:08 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:23.409 00:05:23.409 real 0m4.212s 00:05:23.409 user 0m0.950s 00:05:23.409 sys 0m1.497s 00:42:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.409 00:42:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:23.409 ************************************ 00:05:23.409 END TEST guess_driver 00:05:23.409 ************************************ 00:05:23.667 00:05:23.667 real 0m6.404s 00:05:23.667 user 0m1.415s 00:05:23.667 sys 0m2.406s 00:42:10 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.667
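guess_driver settled on vfio-pci because all three probes in the trace passed: /sys/module/vfio/parameters/enable_unsafe_noiommu_mode exists, /sys/kernel/iommu_groups holds 102 groups, and modprobe resolves vfio_pci's full .ko dependency chain. A condensed sketch of that decision, reusing the script's own 'No valid driver found' string as the failure case; the real driver.sh carries extra state (unsafe_vfio, per-driver fallbacks) that this omits:

  #!/usr/bin/env bash
  shopt -s nullglob
  pick_driver() {
      local groups=(/sys/kernel/iommu_groups/*)
      # vfio-pci is usable when the IOMMU is populated (or unsafe noiommu
      # mode is available) and the module dependency chain resolves.
      if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
         (( ${#groups[@]} > 0 )) &&
         modprobe --show-depends vfio_pci | grep -q '\.ko'; then
          echo vfio-pci
      else
          echo 'No valid driver found'
      fi
  }
  pick_driver   # -> vfio-pci on this host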
00:42:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:23.667 ************************************ 00:05:23.667 END TEST driver 00:05:23.667 ************************************ 00:05:23.667 00:42:10 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:23.667 00:42:10 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.667 00:42:10 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.667 00:42:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:23.667 ************************************ 00:05:23.667 START TEST devices 00:05:23.667 ************************************ 00:05:23.667 00:42:10 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:23.667 * Looking for test storage... 00:05:23.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:23.667 00:42:10 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:23.667 00:42:10 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:23.667 00:42:10 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:23.667 00:42:10 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:25.045 00:42:11 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:25.045 00:42:11 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:25.045 00:42:11 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:25.045 00:42:11 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:25.045 00:42:11 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:25.045 00:42:11 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:25.045 00:42:11 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:25.045 00:42:11 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:84:00.0 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:25.045 00:42:11 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:25.045 00:42:11 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:25.045 No valid GPT data, 
bailing 00:05:25.045 00:42:11 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:25.045 00:42:11 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:25.045 00:42:11 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:25.045 00:42:11 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:25.045 00:42:11 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:25.045 00:42:11 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:84:00.0 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:25.045 00:42:11 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:25.045 00:42:11 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.045 00:42:11 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.045 00:42:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:25.045 ************************************ 00:05:25.045 START TEST nvme_mount 00:05:25.045 ************************************ 00:05:25.045 00:42:11 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:25.046 00:42:11 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:25.046 00:42:11 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:25.986 Creating new GPT entries in memory. 00:05:25.986 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:25.986 other utilities. 00:05:25.986 00:42:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:25.986 00:42:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:25.986 00:42:12 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:25.986 00:42:12 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:25.986 00:42:12 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:26.928 Creating new GPT entries in memory. 00:05:26.928 The operation has completed successfully. 00:05:26.928 00:42:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:26.928 00:42:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:26.928 00:42:13 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3920815 00:05:26.928 00:42:13 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.928 00:42:13 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:26.928 00:42:13 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.928 00:42:13 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:26.928 00:42:13 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:27.187 00:42:14 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.187 00:42:14 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:84:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:27.187 00:42:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:05:27.187 00:42:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:27.187 00:42:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.187 00:42:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:27.187 00:42:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:27.187 00:42:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:27.187 00:42:14 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:27.187 00:42:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
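The nvme_mount steps above are the generic partition-and-mount helper: zap the GPT, carve one 1 GiB partition under flock so parallel jobs cannot race on the same disk, wait for the kernel to publish the new node (the script delegates that to sync_dev_uevents.sh), then mkfs.ext4 and mount it. Roughly, with the device hardcoded for illustration and a simple poll standing in for the uevent sync:

  #!/usr/bin/env bash
  set -e
  disk=/dev/nvme0n1          # illustration only; the test discovers this
  mnt=/var/tmp/nvme_mount    # the test uses .../spdk/test/setup/nvme_mount

  sgdisk "$disk" --zap-all   # wipe GPT, backup GPT, and protective MBR
  # 1 GiB partition: sectors 2048..2099199 at 512 B/sector
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199
  until [[ -b ${disk}p1 ]]; do sleep 0.1; done   # stand-in for sync_dev_uevents.sh
  mkfs.ext4 -qF "${disk}p1"
  mkdir -p "$mnt"
  mount "${disk}p1" "$mnt"
  touch "$mnt/test_nvme"     # the dummy file that verify later checks for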
00:05:27.187 00:42:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:05:27.187 00:42:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:27.187 00:42:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.187 00:42:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.187 00:42:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:28.127 00:42:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:42:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:42:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:42:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status [... identical non-matching @62 [[ <pci> == \0\0\0\0\:\8\4\:\0\0\.\0 ]] / @60 read -r pci _ _ status checks elided for 0000:00:04.7-0000:00:04.0 and 0000:80:04.7-0000:80:04.0 ...] 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:28.127 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:28.127 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:28.127 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:28.386 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:28.386 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:28.386 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:28.386 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:28.386 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:42:15 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:42:15
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:42:15 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:42:15 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:42:15 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:84:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:42:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:42:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:42:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:29.322 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status [... identical non-matching @62/@60 checks elided for 0000:00:04.7-0000:00:04.0 and 0000:80:04.7-0000:80:04.0 ...] 00:05:29.323 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:29.581 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:29.581 00:42:16 setup.sh.devices.nvme_mount
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:84:00.0 data@nvme0n1 '' '' 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:42:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:42:16 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:42:16 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:30.521 00:42:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:42:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:42:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:42:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status [... identical non-matching @62/@60 checks elided for 0000:00:04.7-0000:00:04.0 and 0000:80:04.7-0000:80:04.0 ...] 00:42:17 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:42:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:42:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:42:17 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:42:17 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:42:17 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:42:17 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:42:17 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:30.521 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:30.521 00:05:30.521 real 0m5.521s 00:05:30.521 user 0m1.190s 00:05:30.521 sys 0m2.057s 00:05:30.521 00:42:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.521 00:42:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:30.521 ************************************ 00:05:30.521 END TEST nvme_mount 00:05:30.521 ************************************
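cleanup_nvme tears things down in reverse: unmount only if the mountpoint is still live, then wipefs the partition before the whole disk so no stale signature survives into the next test. The '53 ef' pair erased at offset 0x438 above is the little-endian ext4 superblock magic 0xEF53, and the '45 46 49 20 50 41 52 54' runs earlier are the ASCII 'EFI PART' GPT headers. The same sequence, with the mountpoint shortened for illustration:

  #!/usr/bin/env bash
  mnt=/var/tmp/nvme_mount                  # shortened; the test uses its own tree
  mountpoint -q "$mnt" && umount "$mnt"    # skip umount if nothing is mounted
  [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # ext4 magic on the partition
  [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # GPT/PMBR on the disk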
00:05:30.521 00:42:17 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:30.521 00:42:17 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.521 00:42:17 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.521 00:42:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:30.521 ************************************ 00:05:30.521 START TEST dm_mount 00:05:30.521 ************************************ 00:05:30.521 00:42:17 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:30.521 00:42:17 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:30.521 00:42:17 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:30.521 00:42:17 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:30.521 00:42:17 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:30.521 00:42:17 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:30.521 00:42:17 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:30.521 00:42:17 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:30.521 00:42:17 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:30.521 00:42:17 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:30.521 00:42:17 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:30.521 00:42:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:30.521 00:42:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:30.522 00:42:17 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:30.522 00:42:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:30.522 00:42:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:30.522 00:42:17 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:30.522 00:42:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:30.522 00:42:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:30.522 00:42:17 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:30.522 00:42:17 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:30.522 00:42:17 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:31.462 Creating new GPT entries in memory. 00:05:31.462 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:31.462 other utilities. 00:05:31.462 00:42:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:31.462 00:42:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.462 00:42:18 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:31.462 00:42:18 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:31.462 00:42:18 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:32.844 Creating new GPT entries in memory. 00:05:32.844 The operation has completed successfully. 
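dm_mount runs the same partitioner with part_no=2, so the zap is followed by two back-to-back 1 GiB partitions (sectors 2048-2099199, created above, and 2099200-4196351, created next in the trace) that get stitched into a single device-mapper node. The log only shows 'dmsetup create nvme_dm_test'; the table the script feeds it isn't visible here, but a linear concatenation like the following would produce an equivalent /dev/dm-0:

  #!/usr/bin/env bash
  set -e
  disk=/dev/nvme0n1
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199      # part 1: 1 GiB
  flock "$disk" sgdisk "$disk" --new=2:2099200:4196351   # part 2: 1 GiB

  # Concatenate both partitions into one 2 GiB dm device.
  # Table rows: <logical start> <length in sectors> linear <device> <offset>
  dmsetup create nvme_dm_test <<'EOF'
  0       2097152 linear /dev/nvme0n1p1 0
  2097152 2097152 linear /dev/nvme0n1p2 0
  EOF
  readlink -f /dev/mapper/nvme_dm_test   # -> /dev/dm-0, as in the trace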
00:05:32.844 00:42:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:32.844 00:42:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:32.844 00:42:19 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:32.844 00:42:19 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:32.844 00:42:19 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:33.786 The operation has completed successfully. 00:05:33.786 00:42:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3922594 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.787 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:84:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:42:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:42:20 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:42:20 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:34.739 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status [... identical non-matching @62/@60 checks elided for 0000:00:04.7-0000:00:04.0 and 0000:80:04.7-0000:80:04.0 ...] 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:84:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:42:21
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:34.740 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:34.740 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:34.740 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:34.740 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.740 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:05:34.740 00:42:21 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:34.740 00:42:21 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.740 00:42:21 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:35.744 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:05:35.744 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:35.744 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:35.744 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.744 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:05:35.744 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.744 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:05:35.744 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.744 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:05:35.744 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.744 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:05:35.744 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.744 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:05:35.744 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.744 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:05:35.744 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:35.745 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:35.745 00:05:35.745 real 0m5.099s 00:05:35.745 user 0m0.760s 00:05:35.745 sys 0m1.318s 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.745 00:42:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:35.745 ************************************ 00:05:35.745 END TEST dm_mount 00:05:35.745 ************************************ 00:05:35.745 00:42:22 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:35.745 00:42:22 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:35.745 00:42:22 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:35.745 00:42:22 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.745 00:42:22 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:35.745 00:42:22 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.745 00:42:22 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:36.005 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:36.005 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:36.005 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:36.005 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:36.005 00:42:22 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:36.005 00:42:22 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:36.005 00:42:22 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:36.005 00:42:22 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:36.005 00:42:22 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:36.005 00:42:22 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:36.005 00:42:22 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:36.005 00:05:36.005 real 0m12.393s 00:05:36.005 user 0m2.577s 00:05:36.005 sys 0m4.329s 00:05:36.005 00:42:22 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.005 00:42:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:36.005 ************************************ 00:05:36.005 END TEST devices 00:05:36.005 ************************************ 00:05:36.005 00:05:36.005 real 0m37.359s 00:05:36.005 user 0m10.703s 00:05:36.005 sys 0m16.217s 00:05:36.005 00:42:22 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.005 00:42:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:36.005 ************************************ 00:05:36.005 END TEST setup.sh 00:05:36.005 ************************************ 00:05:36.005 00:42:22 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:37.385 Hugepages 00:05:37.385 node hugesize free / total 00:05:37.385 node0 1048576kB 0 / 0 00:05:37.385 node0 2048kB 2048 / 2048 00:05:37.385 node1 1048576kB 0 / 0 00:05:37.385 node1 2048kB 0 / 0 00:05:37.385 00:05:37.385 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:37.385 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - - 00:05:37.385 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - - 00:05:37.385 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - - 00:05:37.385 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - - 00:05:37.385 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - - 00:05:37.385 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - - 00:05:37.385 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - - 00:05:37.385 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - - 00:05:37.385 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - - 00:05:37.385 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - - 00:05:37.385 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - - 00:05:37.385 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - - 00:05:37.385 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - - 00:05:37.385 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - - 00:05:37.385 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - - 00:05:37.385 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - - 00:05:37.385 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:37.385 00:42:24 -- spdk/autotest.sh@130 -- # uname -s 00:05:37.385 00:42:24 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:37.385 00:42:24 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:37.385 00:42:24 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:38.324 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:05:38.324 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:05:38.324 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:05:38.324 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:05:38.324 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:05:38.324 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:05:38.324 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:05:38.324 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:05:38.324 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:05:38.324 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:05:38.324 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:05:38.324 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:05:38.324 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:05:38.324 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:05:38.324 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:05:38.324 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:05:39.263 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:05:39.263 00:42:26 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:40.202 00:42:27 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:40.202 00:42:27 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:40.202 00:42:27 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:40.202 00:42:27 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:40.202 00:42:27 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:40.202 00:42:27 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:40.202 00:42:27 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:40.202 00:42:27 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:40.202 00:42:27 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:40.202 00:42:27 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:40.202 00:42:27 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0 00:05:40.202 00:42:27 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:41.140 Waiting for block devices as requested 00:05:41.399 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:05:41.399 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:05:41.399 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:05:41.399 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:05:41.658 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:05:41.658 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:05:41.658 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:05:41.658 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:05:41.971 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:05:41.971 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:05:41.971 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:05:41.971 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:05:42.231 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:05:42.231 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:05:42.231 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:05:42.490 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:05:42.490 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:05:42.490 00:42:29 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
00:05:42.490 00:42:29 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:84:00.0 00:05:42.490 00:42:29 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:42.490 00:42:29 -- common/autotest_common.sh@1498 -- # grep 0000:84:00.0/nvme/nvme 00:05:42.490 00:42:29 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:05:42.490 00:42:29 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 ]] 00:05:42.490 00:42:29 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:05:42.490 00:42:29 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:42.490 00:42:29 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:42.490 00:42:29 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:42.490 00:42:29 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:42.490 00:42:29 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:42.490 00:42:29 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:42.490 00:42:29 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:05:42.490 00:42:29 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:42.490 00:42:29 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:42.490 00:42:29 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:42.490 00:42:29 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:42.490 00:42:29 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:42.490 00:42:29 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:42.490 00:42:29 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:42.490 00:42:29 -- common/autotest_common.sh@1553 -- # continue 00:05:42.490 00:42:29 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:42.490 00:42:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.490 00:42:29 -- common/autotest_common.sh@10 -- # set +x 00:05:42.490 00:42:29 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:42.490 00:42:29 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:42.490 00:42:29 -- common/autotest_common.sh@10 -- # set +x 00:05:42.490 00:42:29 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:43.430 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:05:43.430 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:05:43.430 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:05:43.430 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:05:43.430 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:05:43.430 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:05:43.430 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:05:43.430 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:05:43.689 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:05:43.689 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:05:43.689 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:05:43.689 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:05:43.689 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:05:43.689 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:05:43.689 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:05:43.689 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:05:44.629 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:05:44.629 00:42:31 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:44.629 00:42:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:44.629 00:42:31 -- 
common/autotest_common.sh@10 -- # set +x 00:05:44.629 00:42:31 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:44.629 00:42:31 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:44.629 00:42:31 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:44.629 00:42:31 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:44.629 00:42:31 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:44.629 00:42:31 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:44.629 00:42:31 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:44.629 00:42:31 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:44.629 00:42:31 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:44.629 00:42:31 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:44.629 00:42:31 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:44.629 00:42:31 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:44.629 00:42:31 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0 00:05:44.629 00:42:31 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:44.629 00:42:31 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:84:00.0/device 00:05:44.629 00:42:31 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:44.629 00:42:31 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:44.629 00:42:31 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:44.629 00:42:31 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:84:00.0 00:05:44.629 00:42:31 -- common/autotest_common.sh@1588 -- # [[ -z 0000:84:00.0 ]] 00:05:44.629 00:42:31 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3926603 00:05:44.629 00:42:31 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.629 00:42:31 -- common/autotest_common.sh@1594 -- # waitforlisten 3926603 00:05:44.629 00:42:31 -- common/autotest_common.sh@827 -- # '[' -z 3926603 ']' 00:05:44.629 00:42:31 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.629 00:42:31 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:44.629 00:42:31 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.629 00:42:31 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:44.629 00:42:31 -- common/autotest_common.sh@10 -- # set +x 00:05:44.629 [2024-05-15 00:42:31.671496] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:05:44.629 [2024-05-15 00:42:31.671603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926603 ] 00:05:44.887 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.887 [2024-05-15 00:42:31.734141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.887 [2024-05-15 00:42:31.853164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.145 00:42:32 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:45.145 00:42:32 -- common/autotest_common.sh@860 -- # return 0 00:05:45.145 00:42:32 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:45.145 00:42:32 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:45.145 00:42:32 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0 00:05:48.430 nvme0n1 00:05:48.430 00:42:35 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:48.430 [2024-05-15 00:42:35.455599] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:48.430 [2024-05-15 00:42:35.455656] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:48.430 request: 00:05:48.430 { 00:05:48.430 "nvme_ctrlr_name": "nvme0", 00:05:48.430 "password": "test", 00:05:48.430 "method": "bdev_nvme_opal_revert", 00:05:48.430 "req_id": 1 00:05:48.430 } 00:05:48.430 Got JSON-RPC error response 00:05:48.430 response: 00:05:48.430 { 00:05:48.430 "code": -32603, 00:05:48.430 "message": "Internal error" 00:05:48.430 } 00:05:48.430 00:42:35 -- common/autotest_common.sh@1600 -- # true 00:05:48.430 00:42:35 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:48.430 00:42:35 -- common/autotest_common.sh@1604 -- # killprocess 3926603 00:05:48.430 00:42:35 -- common/autotest_common.sh@946 -- # '[' -z 3926603 ']' 00:05:48.430 00:42:35 -- common/autotest_common.sh@950 -- # kill -0 3926603 00:05:48.430 00:42:35 -- common/autotest_common.sh@951 -- # uname 00:05:48.430 00:42:35 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:48.430 00:42:35 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3926603 00:05:48.688 00:42:35 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:48.688 00:42:35 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:48.688 00:42:35 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3926603' 00:05:48.688 killing process with pid 3926603 00:05:48.688 00:42:35 -- common/autotest_common.sh@965 -- # kill 3926603 00:05:48.688 00:42:35 -- common/autotest_common.sh@970 -- # wait 3926603 00:05:50.587 00:42:37 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:50.587 00:42:37 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:50.587 00:42:37 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:50.587 00:42:37 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:50.587 00:42:37 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:50.587 00:42:37 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:50.587 00:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:50.587 00:42:37 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:50.587 00:42:37 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.587 00:42:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.587 00:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:50.587 ************************************ 00:05:50.587 START TEST env 00:05:50.588 ************************************ 00:05:50.588 00:42:37 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:50.588 * Looking for test storage... 00:05:50.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:50.588 00:42:37 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:50.588 00:42:37 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.588 00:42:37 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.588 00:42:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.588 ************************************ 00:05:50.588 START TEST env_memory 00:05:50.588 ************************************ 00:05:50.588 00:42:37 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:50.588 00:05:50.588 00:05:50.588 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.588 http://cunit.sourceforge.net/ 00:05:50.588 00:05:50.588 00:05:50.588 Suite: memory 00:05:50.588 Test: alloc and free memory map ...[2024-05-15 00:42:37.331053] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:50.588 passed 00:05:50.588 Test: mem map translation ...[2024-05-15 00:42:37.361255] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:50.588 [2024-05-15 00:42:37.361295] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:50.588 [2024-05-15 00:42:37.361347] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:50.588 [2024-05-15 00:42:37.361362] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:50.588 passed 00:05:50.588 Test: mem map registration ...[2024-05-15 00:42:37.422456] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:50.588 [2024-05-15 00:42:37.422483] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:50.588 passed 00:05:50.588 Test: mem map adjacent registrations ...passed 00:05:50.588 00:05:50.588 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.588 suites 1 1 n/a 0 0 00:05:50.588 tests 4 4 4 0 0 00:05:50.588 asserts 152 152 152 0 n/a 00:05:50.588 00:05:50.588 Elapsed time = 0.204 seconds 00:05:50.588 00:05:50.588 real 0m0.212s 00:05:50.588 user 0m0.202s 00:05:50.588 sys 0m0.010s 00:05:50.588 00:42:37 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.588 00:42:37 
env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:50.588 ************************************ 00:05:50.588 END TEST env_memory 00:05:50.588 ************************************ 00:05:50.588 00:42:37 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:50.588 00:42:37 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.588 00:42:37 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.588 00:42:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.588 ************************************ 00:05:50.588 START TEST env_vtophys 00:05:50.588 ************************************ 00:05:50.588 00:42:37 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:50.588 EAL: lib.eal log level changed from notice to debug 00:05:50.588 EAL: Detected lcore 0 as core 0 on socket 0 00:05:50.588 EAL: Detected lcore 1 as core 1 on socket 0 00:05:50.588 EAL: Detected lcore 2 as core 2 on socket 0 00:05:50.588 EAL: Detected lcore 3 as core 3 on socket 0 00:05:50.588 EAL: Detected lcore 4 as core 4 on socket 0 00:05:50.588 EAL: Detected lcore 5 as core 5 on socket 0 00:05:50.588 EAL: Detected lcore 6 as core 6 on socket 0 00:05:50.588 EAL: Detected lcore 7 as core 7 on socket 0 00:05:50.588 EAL: Detected lcore 8 as core 0 on socket 1 00:05:50.588 EAL: Detected lcore 9 as core 1 on socket 1 00:05:50.588 EAL: Detected lcore 10 as core 2 on socket 1 00:05:50.588 EAL: Detected lcore 11 as core 3 on socket 1 00:05:50.588 EAL: Detected lcore 12 as core 4 on socket 1 00:05:50.588 EAL: Detected lcore 13 as core 5 on socket 1 00:05:50.588 EAL: Detected lcore 14 as core 6 on socket 1 00:05:50.588 EAL: Detected lcore 15 as core 7 on socket 1 00:05:50.588 EAL: Detected lcore 16 as core 0 on socket 0 00:05:50.588 EAL: Detected lcore 17 as core 1 on socket 0 00:05:50.588 EAL: Detected lcore 18 as core 2 on socket 0 00:05:50.588 EAL: Detected lcore 19 as core 3 on socket 0 00:05:50.588 EAL: Detected lcore 20 as core 4 on socket 0 00:05:50.588 EAL: Detected lcore 21 as core 5 on socket 0 00:05:50.588 EAL: Detected lcore 22 as core 6 on socket 0 00:05:50.588 EAL: Detected lcore 23 as core 7 on socket 0 00:05:50.588 EAL: Detected lcore 24 as core 0 on socket 1 00:05:50.588 EAL: Detected lcore 25 as core 1 on socket 1 00:05:50.588 EAL: Detected lcore 26 as core 2 on socket 1 00:05:50.588 EAL: Detected lcore 27 as core 3 on socket 1 00:05:50.588 EAL: Detected lcore 28 as core 4 on socket 1 00:05:50.588 EAL: Detected lcore 29 as core 5 on socket 1 00:05:50.588 EAL: Detected lcore 30 as core 6 on socket 1 00:05:50.588 EAL: Detected lcore 31 as core 7 on socket 1 00:05:50.588 EAL: Maximum logical cores by configuration: 128 00:05:50.588 EAL: Detected CPU lcores: 32 00:05:50.588 EAL: Detected NUMA nodes: 2 00:05:50.588 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:50.588 EAL: Detected shared linkage of DPDK 00:05:50.588 EAL: No shared files mode enabled, IPC will be disabled 00:05:50.588 EAL: Bus pci wants IOVA as 'DC' 00:05:50.588 EAL: Buses did not request a specific IOVA mode. 00:05:50.588 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:50.588 EAL: Selected IOVA mode 'VA' 00:05:50.588 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.588 EAL: Probing VFIO support... 
00:05:50.588 EAL: IOMMU type 1 (Type 1) is supported 00:05:50.588 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:50.588 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:50.588 EAL: VFIO support initialized 00:05:50.588 EAL: Ask a virtual area of 0x2e000 bytes 00:05:50.588 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:50.588 EAL: Setting up physically contiguous memory... 00:05:50.588 EAL: Setting maximum number of open files to 524288 00:05:50.588 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:50.588 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:50.588 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:50.588 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.588 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:50.588 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.588 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.588 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:50.588 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:50.588 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.588 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:50.588 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.588 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.588 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:50.588 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:50.588 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.588 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:50.588 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.588 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.588 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:50.588 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:50.588 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.588 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:50.588 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.588 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.588 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:50.588 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:50.588 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:50.588 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.588 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:50.588 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:50.588 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.588 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:50.588 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:50.588 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.588 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:50.588 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:50.588 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.588 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:50.588 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:50.588 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.588 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:50.588 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:50.588 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.588 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:05:50.588 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:50.588 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.588 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:50.588 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:50.588 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.588 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:50.588 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:50.588 EAL: Hugepages will be freed exactly as allocated. 00:05:50.588 EAL: No shared files mode enabled, IPC is disabled 00:05:50.588 EAL: No shared files mode enabled, IPC is disabled 00:05:50.588 EAL: TSC frequency is ~2700000 KHz 00:05:50.588 EAL: Main lcore 0 is ready (tid=7f5d23adca00;cpuset=[0]) 00:05:50.588 EAL: Trying to obtain current memory policy. 00:05:50.588 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.588 EAL: Restoring previous memory policy: 0 00:05:50.588 EAL: request: mp_malloc_sync 00:05:50.588 EAL: No shared files mode enabled, IPC is disabled 00:05:50.588 EAL: Heap on socket 0 was expanded by 2MB 00:05:50.588 EAL: No shared files mode enabled, IPC is disabled 00:05:50.588 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:50.588 EAL: Mem event callback 'spdk:(nil)' registered 00:05:50.588 00:05:50.588 00:05:50.589 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.589 http://cunit.sourceforge.net/ 00:05:50.589 00:05:50.589 00:05:50.589 Suite: components_suite 00:05:50.589 Test: vtophys_malloc_test ...passed 00:05:50.589 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:50.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.589 EAL: Restoring previous memory policy: 4 00:05:50.589 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.589 EAL: request: mp_malloc_sync 00:05:50.589 EAL: No shared files mode enabled, IPC is disabled 00:05:50.589 EAL: Heap on socket 0 was expanded by 4MB 00:05:50.589 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.589 EAL: request: mp_malloc_sync 00:05:50.589 EAL: No shared files mode enabled, IPC is disabled 00:05:50.589 EAL: Heap on socket 0 was shrunk by 4MB 00:05:50.589 EAL: Trying to obtain current memory policy. 00:05:50.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.589 EAL: Restoring previous memory policy: 4 00:05:50.589 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.589 EAL: request: mp_malloc_sync 00:05:50.589 EAL: No shared files mode enabled, IPC is disabled 00:05:50.589 EAL: Heap on socket 0 was expanded by 6MB 00:05:50.589 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.589 EAL: request: mp_malloc_sync 00:05:50.589 EAL: No shared files mode enabled, IPC is disabled 00:05:50.589 EAL: Heap on socket 0 was shrunk by 6MB 00:05:50.589 EAL: Trying to obtain current memory policy. 00:05:50.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.589 EAL: Restoring previous memory policy: 4 00:05:50.589 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.589 EAL: request: mp_malloc_sync 00:05:50.589 EAL: No shared files mode enabled, IPC is disabled 00:05:50.589 EAL: Heap on socket 0 was expanded by 10MB 00:05:50.589 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.589 EAL: request: mp_malloc_sync 00:05:50.589 EAL: No shared files mode enabled, IPC is disabled 00:05:50.589 EAL: Heap on socket 0 was shrunk by 10MB 00:05:50.589 EAL: Trying to obtain current memory policy. 
00:05:50.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.589 EAL: Restoring previous memory policy: 4 00:05:50.589 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.589 EAL: request: mp_malloc_sync 00:05:50.589 EAL: No shared files mode enabled, IPC is disabled 00:05:50.589 EAL: Heap on socket 0 was expanded by 18MB 00:05:50.589 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.847 EAL: request: mp_malloc_sync 00:05:50.847 EAL: No shared files mode enabled, IPC is disabled 00:05:50.847 EAL: Heap on socket 0 was shrunk by 18MB 00:05:50.847 EAL: Trying to obtain current memory policy. 00:05:50.847 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.847 EAL: Restoring previous memory policy: 4 00:05:50.847 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.847 EAL: request: mp_malloc_sync 00:05:50.847 EAL: No shared files mode enabled, IPC is disabled 00:05:50.847 EAL: Heap on socket 0 was expanded by 34MB 00:05:50.847 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.847 EAL: request: mp_malloc_sync 00:05:50.847 EAL: No shared files mode enabled, IPC is disabled 00:05:50.847 EAL: Heap on socket 0 was shrunk by 34MB 00:05:50.847 EAL: Trying to obtain current memory policy. 00:05:50.847 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.847 EAL: Restoring previous memory policy: 4 00:05:50.847 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.847 EAL: request: mp_malloc_sync 00:05:50.847 EAL: No shared files mode enabled, IPC is disabled 00:05:50.847 EAL: Heap on socket 0 was expanded by 66MB 00:05:50.847 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.847 EAL: request: mp_malloc_sync 00:05:50.847 EAL: No shared files mode enabled, IPC is disabled 00:05:50.847 EAL: Heap on socket 0 was shrunk by 66MB 00:05:50.847 EAL: Trying to obtain current memory policy. 00:05:50.847 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.847 EAL: Restoring previous memory policy: 4 00:05:50.847 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.848 EAL: request: mp_malloc_sync 00:05:50.848 EAL: No shared files mode enabled, IPC is disabled 00:05:50.848 EAL: Heap on socket 0 was expanded by 130MB 00:05:50.848 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.848 EAL: request: mp_malloc_sync 00:05:50.848 EAL: No shared files mode enabled, IPC is disabled 00:05:50.848 EAL: Heap on socket 0 was shrunk by 130MB 00:05:50.848 EAL: Trying to obtain current memory policy. 00:05:50.848 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.848 EAL: Restoring previous memory policy: 4 00:05:50.848 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.848 EAL: request: mp_malloc_sync 00:05:50.848 EAL: No shared files mode enabled, IPC is disabled 00:05:50.848 EAL: Heap on socket 0 was expanded by 258MB 00:05:50.848 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.848 EAL: request: mp_malloc_sync 00:05:50.848 EAL: No shared files mode enabled, IPC is disabled 00:05:50.848 EAL: Heap on socket 0 was shrunk by 258MB 00:05:50.848 EAL: Trying to obtain current memory policy. 
00:05:50.848 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.106 EAL: Restoring previous memory policy: 4 00:05:51.106 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.106 EAL: request: mp_malloc_sync 00:05:51.106 EAL: No shared files mode enabled, IPC is disabled 00:05:51.106 EAL: Heap on socket 0 was expanded by 514MB 00:05:51.106 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.106 EAL: request: mp_malloc_sync 00:05:51.106 EAL: No shared files mode enabled, IPC is disabled 00:05:51.106 EAL: Heap on socket 0 was shrunk by 514MB 00:05:51.106 EAL: Trying to obtain current memory policy. 00:05:51.106 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.365 EAL: Restoring previous memory policy: 4 00:05:51.365 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.365 EAL: request: mp_malloc_sync 00:05:51.365 EAL: No shared files mode enabled, IPC is disabled 00:05:51.365 EAL: Heap on socket 0 was expanded by 1026MB 00:05:51.625 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.625 EAL: request: mp_malloc_sync 00:05:51.625 EAL: No shared files mode enabled, IPC is disabled 00:05:51.625 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:51.625 passed 00:05:51.625 00:05:51.625 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.625 suites 1 1 n/a 0 0 00:05:51.625 tests 2 2 2 0 0 00:05:51.625 asserts 497 497 497 0 n/a 00:05:51.625 00:05:51.625 Elapsed time = 0.945 seconds 00:05:51.625 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.625 EAL: request: mp_malloc_sync 00:05:51.625 EAL: No shared files mode enabled, IPC is disabled 00:05:51.625 EAL: Heap on socket 0 was shrunk by 2MB 00:05:51.625 EAL: No shared files mode enabled, IPC is disabled 00:05:51.625 EAL: No shared files mode enabled, IPC is disabled 00:05:51.625 EAL: No shared files mode enabled, IPC is disabled 00:05:51.625 00:05:51.625 real 0m1.057s 00:05:51.625 user 0m0.512s 00:05:51.625 sys 0m0.513s 00:05:51.625 00:42:38 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.625 00:42:38 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:51.625 ************************************ 00:05:51.625 END TEST env_vtophys 00:05:51.625 ************************************ 00:05:51.625 00:42:38 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:51.625 00:42:38 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:51.625 00:42:38 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.625 00:42:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.625 ************************************ 00:05:51.625 START TEST env_pci 00:05:51.625 ************************************ 00:05:51.625 00:42:38 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:51.886 00:05:51.886 00:05:51.886 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.886 http://cunit.sourceforge.net/ 00:05:51.886 00:05:51.886 00:05:51.886 Suite: pci 00:05:51.886 Test: pci_hook ...[2024-05-15 00:42:38.687119] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3927295 has claimed it 00:05:51.886 EAL: Cannot find device (10000:00:01.0) 00:05:51.886 EAL: Failed to attach device on primary process 00:05:51.886 passed 00:05:51.886 00:05:51.886 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:51.886 suites 1 1 n/a 0 0 00:05:51.886 tests 1 1 1 0 0 00:05:51.886 asserts 25 25 25 0 n/a 00:05:51.886 00:05:51.886 Elapsed time = 0.017 seconds 00:05:51.886 00:05:51.886 real 0m0.031s 00:05:51.886 user 0m0.014s 00:05:51.886 sys 0m0.016s 00:05:51.886 00:42:38 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.886 00:42:38 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:51.886 ************************************ 00:05:51.886 END TEST env_pci 00:05:51.886 ************************************ 00:05:51.886 00:42:38 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:51.886 00:42:38 env -- env/env.sh@15 -- # uname 00:05:51.886 00:42:38 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:51.886 00:42:38 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:51.886 00:42:38 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:51.886 00:42:38 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:51.886 00:42:38 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.886 00:42:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.886 ************************************ 00:05:51.886 START TEST env_dpdk_post_init 00:05:51.886 ************************************ 00:05:51.886 00:42:38 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:51.886 EAL: Detected CPU lcores: 32 00:05:51.886 EAL: Detected NUMA nodes: 2 00:05:51.886 EAL: Detected shared linkage of DPDK 00:05:51.886 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:51.886 EAL: Selected IOVA mode 'VA' 00:05:51.886 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.886 EAL: VFIO support initialized 00:05:51.886 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:51.886 EAL: Using IOMMU type 1 (Type 1) 00:05:51.886 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:00:04.0 (socket 0) 00:05:51.886 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:00:04.1 (socket 0) 00:05:51.886 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:00:04.2 (socket 0) 00:05:51.886 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:00:04.3 (socket 0) 00:05:51.886 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:00:04.4 (socket 0) 00:05:51.886 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:00:04.5 (socket 0) 00:05:52.146 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:00:04.6 (socket 0) 00:05:52.146 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:00:04.7 (socket 0) 00:05:52.146 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:80:04.0 (socket 1) 00:05:52.146 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:80:04.1 (socket 1) 00:05:52.146 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:80:04.2 (socket 1) 00:05:52.146 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:80:04.3 (socket 1) 00:05:52.146 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:80:04.4 (socket 1) 00:05:52.146 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:80:04.5 (socket 1) 00:05:52.146 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:80:04.6 (socket 1) 00:05:52.146 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:80:04.7 (socket 1) 00:05:53.082 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:84:00.0 (socket 1) 00:05:56.361 EAL: Releasing PCI mapped resource for 0000:84:00.0 00:05:56.361 EAL: Calling pci_unmap_resource for 0000:84:00.0 at 0x202001040000 00:05:56.361 Starting DPDK initialization... 00:05:56.361 Starting SPDK post initialization... 00:05:56.361 SPDK NVMe probe 00:05:56.361 Attaching to 0000:84:00.0 00:05:56.361 Attached to 0000:84:00.0 00:05:56.361 Cleaning up... 00:05:56.361 00:05:56.361 real 0m4.364s 00:05:56.361 user 0m3.250s 00:05:56.361 sys 0m0.179s 00:05:56.361 00:42:43 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.361 00:42:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.361 ************************************ 00:05:56.361 END TEST env_dpdk_post_init 00:05:56.361 ************************************ 00:05:56.361 00:42:43 env -- env/env.sh@26 -- # uname 00:05:56.361 00:42:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:56.361 00:42:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:56.361 00:42:43 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.361 00:42:43 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.361 00:42:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.361 ************************************ 00:05:56.361 START TEST env_mem_callbacks 00:05:56.361 ************************************ 00:05:56.361 00:42:43 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:56.361 EAL: Detected CPU lcores: 32 00:05:56.361 EAL: Detected NUMA nodes: 2 00:05:56.361 EAL: Detected shared linkage of DPDK 00:05:56.361 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:56.361 EAL: Selected IOVA mode 'VA' 00:05:56.361 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.361 EAL: VFIO support initialized 00:05:56.361 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:56.361 00:05:56.361 00:05:56.361 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.361 http://cunit.sourceforge.net/ 00:05:56.361 00:05:56.361 00:05:56.361 Suite: memory 00:05:56.361 Test: test ... 
00:05:56.361 register 0x200000200000 2097152 00:05:56.361 malloc 3145728 00:05:56.361 register 0x200000400000 4194304 00:05:56.361 buf 0x200000500000 len 3145728 PASSED 00:05:56.361 malloc 64 00:05:56.361 buf 0x2000004fff40 len 64 PASSED 00:05:56.361 malloc 4194304 00:05:56.361 register 0x200000800000 6291456 00:05:56.361 buf 0x200000a00000 len 4194304 PASSED 00:05:56.361 free 0x200000500000 3145728 00:05:56.361 free 0x2000004fff40 64 00:05:56.361 unregister 0x200000400000 4194304 PASSED 00:05:56.361 free 0x200000a00000 4194304 00:05:56.361 unregister 0x200000800000 6291456 PASSED 00:05:56.361 malloc 8388608 00:05:56.361 register 0x200000400000 10485760 00:05:56.361 buf 0x200000600000 len 8388608 PASSED 00:05:56.361 free 0x200000600000 8388608 00:05:56.361 unregister 0x200000400000 10485760 PASSED 00:05:56.361 passed 00:05:56.361 00:05:56.361 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.361 suites 1 1 n/a 0 0 00:05:56.361 tests 1 1 1 0 0 00:05:56.361 asserts 15 15 15 0 n/a 00:05:56.361 00:05:56.361 Elapsed time = 0.006 seconds 00:05:56.361 00:05:56.361 real 0m0.048s 00:05:56.361 user 0m0.011s 00:05:56.361 sys 0m0.037s 00:05:56.361 00:42:43 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.361 00:42:43 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:56.361 ************************************ 00:05:56.361 END TEST env_mem_callbacks 00:05:56.361 ************************************ 00:05:56.361 00:05:56.361 real 0m6.068s 00:05:56.361 user 0m4.128s 00:05:56.361 sys 0m0.970s 00:05:56.361 00:42:43 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.361 00:42:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.361 ************************************ 00:05:56.361 END TEST env 00:05:56.361 ************************************ 00:05:56.361 00:42:43 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:56.361 00:42:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.361 00:42:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.361 00:42:43 -- common/autotest_common.sh@10 -- # set +x 00:05:56.361 ************************************ 00:05:56.361 START TEST rpc 00:05:56.362 ************************************ 00:05:56.362 00:42:43 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:56.362 * Looking for test storage... 00:05:56.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:56.362 00:42:43 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3927828 00:05:56.362 00:42:43 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.362 00:42:43 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:56.362 00:42:43 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3927828 00:05:56.362 00:42:43 rpc -- common/autotest_common.sh@827 -- # '[' -z 3927828 ']' 00:05:56.362 00:42:43 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.362 00:42:43 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:56.362 00:42:43 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:56.362 00:42:43 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:56.362 00:42:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.622 [2024-05-15 00:42:43.431604] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:05:56.622 [2024-05-15 00:42:43.431709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927828 ] 00:05:56.622 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.622 [2024-05-15 00:42:43.491344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.622 [2024-05-15 00:42:43.608300] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:56.622 [2024-05-15 00:42:43.608363] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3927828' to capture a snapshot of events at runtime. 00:05:56.622 [2024-05-15 00:42:43.608380] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:56.622 [2024-05-15 00:42:43.608393] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:56.622 [2024-05-15 00:42:43.608405] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3927828 for offline analysis/debug. 00:05:56.622 [2024-05-15 00:42:43.608443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.880 00:42:43 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:56.880 00:42:43 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:56.880 00:42:43 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:56.880 00:42:43 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:56.880 00:42:43 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:56.880 00:42:43 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:56.880 00:42:43 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.880 00:42:43 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.880 00:42:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.880 ************************************ 00:05:56.880 START TEST rpc_integrity 00:05:56.880 ************************************ 00:05:56.880 00:42:43 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:56.880 00:42:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:56.880 00:42:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.880 00:42:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.880 00:42:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.880 00:42:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:56.880 00:42:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:56.880 00:42:43 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:56.880 00:42:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:56.880 00:42:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.880 00:42:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.139 00:42:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.139 00:42:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:57.139 00:42:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:57.139 00:42:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.139 00:42:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.139 00:42:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.139 00:42:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:57.139 { 00:05:57.139 "name": "Malloc0", 00:05:57.139 "aliases": [ 00:05:57.139 "cc93bcec-c227-4835-8d54-54a58107ddd3" 00:05:57.139 ], 00:05:57.139 "product_name": "Malloc disk", 00:05:57.139 "block_size": 512, 00:05:57.139 "num_blocks": 16384, 00:05:57.139 "uuid": "cc93bcec-c227-4835-8d54-54a58107ddd3", 00:05:57.139 "assigned_rate_limits": { 00:05:57.139 "rw_ios_per_sec": 0, 00:05:57.139 "rw_mbytes_per_sec": 0, 00:05:57.139 "r_mbytes_per_sec": 0, 00:05:57.139 "w_mbytes_per_sec": 0 00:05:57.139 }, 00:05:57.139 "claimed": false, 00:05:57.139 "zoned": false, 00:05:57.139 "supported_io_types": { 00:05:57.139 "read": true, 00:05:57.139 "write": true, 00:05:57.139 "unmap": true, 00:05:57.139 "write_zeroes": true, 00:05:57.139 "flush": true, 00:05:57.139 "reset": true, 00:05:57.139 "compare": false, 00:05:57.139 "compare_and_write": false, 00:05:57.139 "abort": true, 00:05:57.139 "nvme_admin": false, 00:05:57.139 "nvme_io": false 00:05:57.139 }, 00:05:57.139 "memory_domains": [ 00:05:57.139 { 00:05:57.139 "dma_device_id": "system", 00:05:57.139 "dma_device_type": 1 00:05:57.139 }, 00:05:57.139 { 00:05:57.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.139 "dma_device_type": 2 00:05:57.139 } 00:05:57.139 ], 00:05:57.139 "driver_specific": {} 00:05:57.139 } 00:05:57.139 ]' 00:05:57.139 00:42:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:57.139 00:42:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:57.139 00:42:43 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:57.139 00:42:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.139 00:42:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.139 [2024-05-15 00:42:44.005086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:57.139 [2024-05-15 00:42:44.005135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:57.139 [2024-05-15 00:42:44.005158] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f9e590 00:05:57.139 [2024-05-15 00:42:44.005173] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:57.139 [2024-05-15 00:42:44.006709] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:57.139 [2024-05-15 00:42:44.006735] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:57.139 Passthru0 00:05:57.139 00:42:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.139 00:42:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:57.139 00:42:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.139 00:42:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.139 00:42:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.139 00:42:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:57.139 { 00:05:57.139 "name": "Malloc0", 00:05:57.139 "aliases": [ 00:05:57.139 "cc93bcec-c227-4835-8d54-54a58107ddd3" 00:05:57.139 ], 00:05:57.140 "product_name": "Malloc disk", 00:05:57.140 "block_size": 512, 00:05:57.140 "num_blocks": 16384, 00:05:57.140 "uuid": "cc93bcec-c227-4835-8d54-54a58107ddd3", 00:05:57.140 "assigned_rate_limits": { 00:05:57.140 "rw_ios_per_sec": 0, 00:05:57.140 "rw_mbytes_per_sec": 0, 00:05:57.140 "r_mbytes_per_sec": 0, 00:05:57.140 "w_mbytes_per_sec": 0 00:05:57.140 }, 00:05:57.140 "claimed": true, 00:05:57.140 "claim_type": "exclusive_write", 00:05:57.140 "zoned": false, 00:05:57.140 "supported_io_types": { 00:05:57.140 "read": true, 00:05:57.140 "write": true, 00:05:57.140 "unmap": true, 00:05:57.140 "write_zeroes": true, 00:05:57.140 "flush": true, 00:05:57.140 "reset": true, 00:05:57.140 "compare": false, 00:05:57.140 "compare_and_write": false, 00:05:57.140 "abort": true, 00:05:57.140 "nvme_admin": false, 00:05:57.140 "nvme_io": false 00:05:57.140 }, 00:05:57.140 "memory_domains": [ 00:05:57.140 { 00:05:57.140 "dma_device_id": "system", 00:05:57.140 "dma_device_type": 1 00:05:57.140 }, 00:05:57.140 { 00:05:57.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.140 "dma_device_type": 2 00:05:57.140 } 00:05:57.140 ], 00:05:57.140 "driver_specific": {} 00:05:57.140 }, 00:05:57.140 { 00:05:57.140 "name": "Passthru0", 00:05:57.140 "aliases": [ 00:05:57.140 "5578e6f4-a231-5d61-a6cb-d5ba1ec2faba" 00:05:57.140 ], 00:05:57.140 "product_name": "passthru", 00:05:57.140 "block_size": 512, 00:05:57.140 "num_blocks": 16384, 00:05:57.140 "uuid": "5578e6f4-a231-5d61-a6cb-d5ba1ec2faba", 00:05:57.140 "assigned_rate_limits": { 00:05:57.140 "rw_ios_per_sec": 0, 00:05:57.140 "rw_mbytes_per_sec": 0, 00:05:57.140 "r_mbytes_per_sec": 0, 00:05:57.140 "w_mbytes_per_sec": 0 00:05:57.140 }, 00:05:57.140 "claimed": false, 00:05:57.140 "zoned": false, 00:05:57.140 "supported_io_types": { 00:05:57.140 "read": true, 00:05:57.140 "write": true, 00:05:57.140 "unmap": true, 00:05:57.140 "write_zeroes": true, 00:05:57.140 "flush": true, 00:05:57.140 "reset": true, 00:05:57.140 "compare": false, 00:05:57.140 "compare_and_write": false, 00:05:57.140 "abort": true, 00:05:57.140 "nvme_admin": false, 00:05:57.140 "nvme_io": false 00:05:57.140 }, 00:05:57.140 "memory_domains": [ 00:05:57.140 { 00:05:57.140 "dma_device_id": "system", 00:05:57.140 "dma_device_type": 1 00:05:57.140 }, 00:05:57.140 { 00:05:57.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.140 "dma_device_type": 2 00:05:57.140 } 00:05:57.140 ], 00:05:57.140 "driver_specific": { 00:05:57.140 "passthru": { 00:05:57.140 "name": "Passthru0", 00:05:57.140 "base_bdev_name": "Malloc0" 00:05:57.140 } 00:05:57.140 } 00:05:57.140 } 00:05:57.140 ]' 00:05:57.140 00:42:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:57.140 00:42:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:57.140 00:42:44 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:57.140 00:42:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.140 00:42:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.140 
00:42:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.140 00:42:44 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:57.140 00:42:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.140 00:42:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.140 00:42:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.140 00:42:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:57.140 00:42:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.140 00:42:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.140 00:42:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.140 00:42:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:57.140 00:42:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:57.140 00:42:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:57.140 00:05:57.140 real 0m0.256s 00:05:57.140 user 0m0.162s 00:05:57.140 sys 0m0.027s 00:05:57.140 00:42:44 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.140 00:42:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.140 ************************************ 00:05:57.140 END TEST rpc_integrity 00:05:57.140 ************************************ 00:05:57.140 00:42:44 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:57.140 00:42:44 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.140 00:42:44 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.140 00:42:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.140 ************************************ 00:05:57.140 START TEST rpc_plugins 00:05:57.140 ************************************ 00:05:57.140 00:42:44 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:57.140 00:42:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:57.140 00:42:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.140 00:42:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.398 00:42:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.398 00:42:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:57.398 00:42:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:57.398 00:42:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.398 00:42:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.398 00:42:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.398 00:42:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:57.398 { 00:05:57.398 "name": "Malloc1", 00:05:57.398 "aliases": [ 00:05:57.398 "55b338bb-dc35-4a8e-8f89-b475fc280e2c" 00:05:57.398 ], 00:05:57.398 "product_name": "Malloc disk", 00:05:57.398 "block_size": 4096, 00:05:57.398 "num_blocks": 256, 00:05:57.398 "uuid": "55b338bb-dc35-4a8e-8f89-b475fc280e2c", 00:05:57.398 "assigned_rate_limits": { 00:05:57.398 "rw_ios_per_sec": 0, 00:05:57.398 "rw_mbytes_per_sec": 0, 00:05:57.398 "r_mbytes_per_sec": 0, 00:05:57.398 "w_mbytes_per_sec": 0 00:05:57.398 }, 00:05:57.398 "claimed": false, 00:05:57.398 "zoned": false, 00:05:57.398 "supported_io_types": { 00:05:57.398 "read": true, 00:05:57.398 "write": true, 00:05:57.398 "unmap": true, 00:05:57.398 "write_zeroes": true, 00:05:57.398 
"flush": true, 00:05:57.398 "reset": true, 00:05:57.398 "compare": false, 00:05:57.398 "compare_and_write": false, 00:05:57.398 "abort": true, 00:05:57.398 "nvme_admin": false, 00:05:57.398 "nvme_io": false 00:05:57.398 }, 00:05:57.398 "memory_domains": [ 00:05:57.398 { 00:05:57.398 "dma_device_id": "system", 00:05:57.398 "dma_device_type": 1 00:05:57.398 }, 00:05:57.398 { 00:05:57.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.398 "dma_device_type": 2 00:05:57.398 } 00:05:57.398 ], 00:05:57.398 "driver_specific": {} 00:05:57.398 } 00:05:57.398 ]' 00:05:57.398 00:42:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:57.398 00:42:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:57.398 00:42:44 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:57.398 00:42:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.398 00:42:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.398 00:42:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.398 00:42:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:57.398 00:42:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.398 00:42:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.398 00:42:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.398 00:42:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:57.398 00:42:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:57.398 00:42:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:57.398 00:05:57.398 real 0m0.137s 00:05:57.398 user 0m0.089s 00:05:57.398 sys 0m0.008s 00:05:57.398 00:42:44 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.398 00:42:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.398 ************************************ 00:05:57.398 END TEST rpc_plugins 00:05:57.398 ************************************ 00:05:57.398 00:42:44 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:57.398 00:42:44 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.398 00:42:44 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.398 00:42:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.398 ************************************ 00:05:57.398 START TEST rpc_trace_cmd_test 00:05:57.398 ************************************ 00:05:57.398 00:42:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:57.398 00:42:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:57.398 00:42:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:57.398 00:42:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.398 00:42:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:57.398 00:42:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.398 00:42:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:57.398 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3927828", 00:05:57.398 "tpoint_group_mask": "0x8", 00:05:57.398 "iscsi_conn": { 00:05:57.398 "mask": "0x2", 00:05:57.398 "tpoint_mask": "0x0" 00:05:57.398 }, 00:05:57.398 "scsi": { 00:05:57.398 "mask": "0x4", 00:05:57.398 "tpoint_mask": "0x0" 00:05:57.398 }, 00:05:57.398 "bdev": { 00:05:57.398 "mask": "0x8", 00:05:57.398 "tpoint_mask": 
"0xffffffffffffffff" 00:05:57.398 }, 00:05:57.398 "nvmf_rdma": { 00:05:57.398 "mask": "0x10", 00:05:57.398 "tpoint_mask": "0x0" 00:05:57.398 }, 00:05:57.398 "nvmf_tcp": { 00:05:57.398 "mask": "0x20", 00:05:57.398 "tpoint_mask": "0x0" 00:05:57.398 }, 00:05:57.398 "ftl": { 00:05:57.398 "mask": "0x40", 00:05:57.398 "tpoint_mask": "0x0" 00:05:57.398 }, 00:05:57.398 "blobfs": { 00:05:57.398 "mask": "0x80", 00:05:57.398 "tpoint_mask": "0x0" 00:05:57.398 }, 00:05:57.398 "dsa": { 00:05:57.398 "mask": "0x200", 00:05:57.398 "tpoint_mask": "0x0" 00:05:57.398 }, 00:05:57.398 "thread": { 00:05:57.398 "mask": "0x400", 00:05:57.398 "tpoint_mask": "0x0" 00:05:57.398 }, 00:05:57.398 "nvme_pcie": { 00:05:57.398 "mask": "0x800", 00:05:57.398 "tpoint_mask": "0x0" 00:05:57.398 }, 00:05:57.398 "iaa": { 00:05:57.398 "mask": "0x1000", 00:05:57.398 "tpoint_mask": "0x0" 00:05:57.398 }, 00:05:57.398 "nvme_tcp": { 00:05:57.398 "mask": "0x2000", 00:05:57.398 "tpoint_mask": "0x0" 00:05:57.398 }, 00:05:57.398 "bdev_nvme": { 00:05:57.398 "mask": "0x4000", 00:05:57.398 "tpoint_mask": "0x0" 00:05:57.398 }, 00:05:57.398 "sock": { 00:05:57.398 "mask": "0x8000", 00:05:57.398 "tpoint_mask": "0x0" 00:05:57.398 } 00:05:57.398 }' 00:05:57.398 00:42:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:57.398 00:42:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:57.398 00:42:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:57.656 00:42:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:57.656 00:42:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:57.656 00:42:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:57.656 00:42:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:57.656 00:42:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:57.656 00:42:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:57.656 00:42:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:57.656 00:05:57.656 real 0m0.218s 00:05:57.656 user 0m0.192s 00:05:57.656 sys 0m0.018s 00:05:57.656 00:42:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.656 00:42:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:57.656 ************************************ 00:05:57.656 END TEST rpc_trace_cmd_test 00:05:57.656 ************************************ 00:05:57.656 00:42:44 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:57.656 00:42:44 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:57.656 00:42:44 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:57.656 00:42:44 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.656 00:42:44 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.656 00:42:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.656 ************************************ 00:05:57.656 START TEST rpc_daemon_integrity 00:05:57.656 ************************************ 00:05:57.656 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:57.656 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:57.656 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.656 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.656 00:42:44 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.656 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:57.656 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:57.656 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:57.656 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:57.656 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.656 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.656 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.656 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:57.656 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:57.656 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.656 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.914 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.914 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:57.914 { 00:05:57.914 "name": "Malloc2", 00:05:57.914 "aliases": [ 00:05:57.914 "8f48d55b-8fbe-4747-bcb5-107282460f96" 00:05:57.914 ], 00:05:57.914 "product_name": "Malloc disk", 00:05:57.914 "block_size": 512, 00:05:57.914 "num_blocks": 16384, 00:05:57.914 "uuid": "8f48d55b-8fbe-4747-bcb5-107282460f96", 00:05:57.914 "assigned_rate_limits": { 00:05:57.914 "rw_ios_per_sec": 0, 00:05:57.914 "rw_mbytes_per_sec": 0, 00:05:57.914 "r_mbytes_per_sec": 0, 00:05:57.914 "w_mbytes_per_sec": 0 00:05:57.914 }, 00:05:57.914 "claimed": false, 00:05:57.914 "zoned": false, 00:05:57.914 "supported_io_types": { 00:05:57.914 "read": true, 00:05:57.914 "write": true, 00:05:57.914 "unmap": true, 00:05:57.914 "write_zeroes": true, 00:05:57.914 "flush": true, 00:05:57.914 "reset": true, 00:05:57.914 "compare": false, 00:05:57.914 "compare_and_write": false, 00:05:57.914 "abort": true, 00:05:57.914 "nvme_admin": false, 00:05:57.914 "nvme_io": false 00:05:57.914 }, 00:05:57.914 "memory_domains": [ 00:05:57.914 { 00:05:57.914 "dma_device_id": "system", 00:05:57.914 "dma_device_type": 1 00:05:57.914 }, 00:05:57.914 { 00:05:57.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.914 "dma_device_type": 2 00:05:57.914 } 00:05:57.914 ], 00:05:57.914 "driver_specific": {} 00:05:57.914 } 00:05:57.914 ]' 00:05:57.914 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:57.914 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:57.914 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:57.914 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.914 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.914 [2024-05-15 00:42:44.767381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:57.914 [2024-05-15 00:42:44.767430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:57.914 [2024-05-15 00:42:44.767457] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f9f910 00:05:57.914 [2024-05-15 00:42:44.767473] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:57.914 [2024-05-15 00:42:44.768871] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:57.914 [2024-05-15 00:42:44.768897] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:57.914 Passthru0 00:05:57.914 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.914 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:57.914 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.914 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.914 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.914 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:57.914 { 00:05:57.914 "name": "Malloc2", 00:05:57.915 "aliases": [ 00:05:57.915 "8f48d55b-8fbe-4747-bcb5-107282460f96" 00:05:57.915 ], 00:05:57.915 "product_name": "Malloc disk", 00:05:57.915 "block_size": 512, 00:05:57.915 "num_blocks": 16384, 00:05:57.915 "uuid": "8f48d55b-8fbe-4747-bcb5-107282460f96", 00:05:57.915 "assigned_rate_limits": { 00:05:57.915 "rw_ios_per_sec": 0, 00:05:57.915 "rw_mbytes_per_sec": 0, 00:05:57.915 "r_mbytes_per_sec": 0, 00:05:57.915 "w_mbytes_per_sec": 0 00:05:57.915 }, 00:05:57.915 "claimed": true, 00:05:57.915 "claim_type": "exclusive_write", 00:05:57.915 "zoned": false, 00:05:57.915 "supported_io_types": { 00:05:57.915 "read": true, 00:05:57.915 "write": true, 00:05:57.915 "unmap": true, 00:05:57.915 "write_zeroes": true, 00:05:57.915 "flush": true, 00:05:57.915 "reset": true, 00:05:57.915 "compare": false, 00:05:57.915 "compare_and_write": false, 00:05:57.915 "abort": true, 00:05:57.915 "nvme_admin": false, 00:05:57.915 "nvme_io": false 00:05:57.915 }, 00:05:57.915 "memory_domains": [ 00:05:57.915 { 00:05:57.915 "dma_device_id": "system", 00:05:57.915 "dma_device_type": 1 00:05:57.915 }, 00:05:57.915 { 00:05:57.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.915 "dma_device_type": 2 00:05:57.915 } 00:05:57.915 ], 00:05:57.915 "driver_specific": {} 00:05:57.915 }, 00:05:57.915 { 00:05:57.915 "name": "Passthru0", 00:05:57.915 "aliases": [ 00:05:57.915 "c4e469b9-4008-5591-b21f-e8fba7ab77d3" 00:05:57.915 ], 00:05:57.915 "product_name": "passthru", 00:05:57.915 "block_size": 512, 00:05:57.915 "num_blocks": 16384, 00:05:57.915 "uuid": "c4e469b9-4008-5591-b21f-e8fba7ab77d3", 00:05:57.915 "assigned_rate_limits": { 00:05:57.915 "rw_ios_per_sec": 0, 00:05:57.915 "rw_mbytes_per_sec": 0, 00:05:57.915 "r_mbytes_per_sec": 0, 00:05:57.915 "w_mbytes_per_sec": 0 00:05:57.915 }, 00:05:57.915 "claimed": false, 00:05:57.915 "zoned": false, 00:05:57.915 "supported_io_types": { 00:05:57.915 "read": true, 00:05:57.915 "write": true, 00:05:57.915 "unmap": true, 00:05:57.915 "write_zeroes": true, 00:05:57.915 "flush": true, 00:05:57.915 "reset": true, 00:05:57.915 "compare": false, 00:05:57.915 "compare_and_write": false, 00:05:57.915 "abort": true, 00:05:57.915 "nvme_admin": false, 00:05:57.915 "nvme_io": false 00:05:57.915 }, 00:05:57.915 "memory_domains": [ 00:05:57.915 { 00:05:57.915 "dma_device_id": "system", 00:05:57.915 "dma_device_type": 1 00:05:57.915 }, 00:05:57.915 { 00:05:57.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.915 "dma_device_type": 2 00:05:57.915 } 00:05:57.915 ], 00:05:57.915 "driver_specific": { 00:05:57.915 "passthru": { 00:05:57.915 "name": "Passthru0", 00:05:57.915 "base_bdev_name": "Malloc2" 00:05:57.915 } 00:05:57.915 } 00:05:57.915 } 00:05:57.915 ]' 00:05:57.915 00:42:44 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:57.915 00:05:57.915 real 0m0.247s 00:05:57.915 user 0m0.162s 00:05:57.915 sys 0m0.025s 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.915 00:42:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.915 ************************************ 00:05:57.915 END TEST rpc_daemon_integrity 00:05:57.915 ************************************ 00:05:57.915 00:42:44 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:57.915 00:42:44 rpc -- rpc/rpc.sh@84 -- # killprocess 3927828 00:05:57.915 00:42:44 rpc -- common/autotest_common.sh@946 -- # '[' -z 3927828 ']' 00:05:57.915 00:42:44 rpc -- common/autotest_common.sh@950 -- # kill -0 3927828 00:05:57.915 00:42:44 rpc -- common/autotest_common.sh@951 -- # uname 00:05:57.915 00:42:44 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:57.915 00:42:44 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3927828 00:05:57.915 00:42:44 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:57.915 00:42:44 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:57.915 00:42:44 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3927828' 00:05:57.915 killing process with pid 3927828 00:05:57.915 00:42:44 rpc -- common/autotest_common.sh@965 -- # kill 3927828 00:05:57.915 00:42:44 rpc -- common/autotest_common.sh@970 -- # wait 3927828 00:05:58.481 00:05:58.481 real 0m1.943s 00:05:58.481 user 0m2.546s 00:05:58.481 sys 0m0.571s 00:05:58.481 00:42:45 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.481 00:42:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.481 ************************************ 00:05:58.481 END TEST rpc 00:05:58.481 ************************************ 00:05:58.481 00:42:45 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:58.481 00:42:45 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.481 00:42:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.481 00:42:45 -- common/autotest_common.sh@10 -- # set +x 00:05:58.481 ************************************ 00:05:58.481 START TEST skip_rpc 00:05:58.481 ************************************ 00:05:58.481 00:42:45 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:58.481 * Looking for test storage... 00:05:58.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:58.481 00:42:45 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:58.481 00:42:45 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:58.481 00:42:45 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:58.482 00:42:45 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.482 00:42:45 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.482 00:42:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.482 ************************************ 00:05:58.482 START TEST skip_rpc 00:05:58.482 ************************************ 00:05:58.482 00:42:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:58.482 00:42:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3928201 00:05:58.482 00:42:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:58.482 00:42:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.482 00:42:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:58.482 [2024-05-15 00:42:45.471596] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
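This instance runs with --no-rpc-server, so the spdk_get_version call that follows has to fail, and the NOT wrapper turns that expected failure into a pass; the es bookkeeping visible below also separates ordinary failures from signal deaths (status above 128). A minimal sketch of the inversion (the real helper in autotest_common.sh handles more cases):

    NOT() {
        local es=0
        "$@" || es=$?      # capture the wrapped command's exit status
        (( es != 0 ))      # succeed only when the command failed
    }

    NOT rpc_cmd spdk_get_version   # no RPC server, so this must not succeed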
00:05:58.482 [2024-05-15 00:42:45.471697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928201 ] 00:05:58.482 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.482 [2024-05-15 00:42:45.531603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.740 [2024-05-15 00:42:45.650838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3928201 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3928201 ']' 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3928201 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3928201 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3928201' 00:06:04.001 killing process with pid 3928201 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3928201 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3928201 00:06:04.001 00:06:04.001 real 0m5.358s 00:06:04.001 user 0m5.074s 00:06:04.001 sys 0m0.270s 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.001 00:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.001 ************************************ 00:06:04.001 END TEST skip_rpc 
00:06:04.001 ************************************ 00:06:04.001 00:42:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:04.001 00:42:50 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.001 00:42:50 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.001 00:42:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.001 ************************************ 00:06:04.001 START TEST skip_rpc_with_json 00:06:04.001 ************************************ 00:06:04.001 00:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:06:04.001 00:42:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:04.001 00:42:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3928728 00:06:04.001 00:42:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.001 00:42:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.001 00:42:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3928728 00:06:04.001 00:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3928728 ']' 00:06:04.001 00:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.001 00:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.001 00:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.001 00:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.001 00:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.001 [2024-05-15 00:42:50.890146] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
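skip_rpc_with_json starts by provoking a JSON-RPC error on purpose, as the exchange a little further below shows: nvmf_get_transports --trtype tcp is issued before any TCP transport exists, and the target answers with code -19, matching ENODEV ("No such device"). A sketch of observing that envelope when driving rpc.py directly (illustrative, not the harness's exact check):

    if ! out=$(rpc.py nvmf_get_transports --trtype tcp 2>&1); then
        # rpc.py exits non-zero and prints the JSON-RPC error response
        echo "expected failure before transport creation: $out"
    fi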
00:06:04.001 [2024-05-15 00:42:50.890237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928728 ] 00:06:04.001 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.001 [2024-05-15 00:42:50.948639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.259 [2024-05-15 00:42:51.065262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.259 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:04.259 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:06:04.259 00:42:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:04.259 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.259 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.259 [2024-05-15 00:42:51.297987] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:04.259 request: 00:06:04.259 { 00:06:04.259 "trtype": "tcp", 00:06:04.259 "method": "nvmf_get_transports", 00:06:04.259 "req_id": 1 00:06:04.259 } 00:06:04.259 Got JSON-RPC error response 00:06:04.259 response: 00:06:04.259 { 00:06:04.259 "code": -19, 00:06:04.259 "message": "No such device" 00:06:04.259 } 00:06:04.259 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:04.259 00:42:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:04.259 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.259 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.259 [2024-05-15 00:42:51.306103] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.259 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.259 00:42:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:04.259 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.259 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.518 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.518 00:42:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:04.518 { 00:06:04.518 "subsystems": [ 00:06:04.518 { 00:06:04.518 "subsystem": "vfio_user_target", 00:06:04.518 "config": null 00:06:04.518 }, 00:06:04.518 { 00:06:04.518 "subsystem": "keyring", 00:06:04.518 "config": [] 00:06:04.518 }, 00:06:04.518 { 00:06:04.518 "subsystem": "iobuf", 00:06:04.518 "config": [ 00:06:04.518 { 00:06:04.518 "method": "iobuf_set_options", 00:06:04.518 "params": { 00:06:04.518 "small_pool_count": 8192, 00:06:04.518 "large_pool_count": 1024, 00:06:04.518 "small_bufsize": 8192, 00:06:04.518 "large_bufsize": 135168 00:06:04.518 } 00:06:04.518 } 00:06:04.518 ] 00:06:04.518 }, 00:06:04.518 { 00:06:04.518 "subsystem": "sock", 00:06:04.518 "config": [ 00:06:04.518 { 00:06:04.518 "method": "sock_impl_set_options", 00:06:04.518 "params": { 00:06:04.518 "impl_name": "posix", 00:06:04.518 "recv_buf_size": 2097152, 00:06:04.518 "send_buf_size": 2097152, 
00:06:04.518 "enable_recv_pipe": true, 00:06:04.518 "enable_quickack": false, 00:06:04.518 "enable_placement_id": 0, 00:06:04.518 "enable_zerocopy_send_server": true, 00:06:04.518 "enable_zerocopy_send_client": false, 00:06:04.518 "zerocopy_threshold": 0, 00:06:04.518 "tls_version": 0, 00:06:04.518 "enable_ktls": false 00:06:04.518 } 00:06:04.518 }, 00:06:04.518 { 00:06:04.518 "method": "sock_impl_set_options", 00:06:04.518 "params": { 00:06:04.518 "impl_name": "ssl", 00:06:04.518 "recv_buf_size": 4096, 00:06:04.518 "send_buf_size": 4096, 00:06:04.518 "enable_recv_pipe": true, 00:06:04.518 "enable_quickack": false, 00:06:04.518 "enable_placement_id": 0, 00:06:04.518 "enable_zerocopy_send_server": true, 00:06:04.518 "enable_zerocopy_send_client": false, 00:06:04.518 "zerocopy_threshold": 0, 00:06:04.518 "tls_version": 0, 00:06:04.518 "enable_ktls": false 00:06:04.518 } 00:06:04.518 } 00:06:04.518 ] 00:06:04.518 }, 00:06:04.518 { 00:06:04.518 "subsystem": "vmd", 00:06:04.518 "config": [] 00:06:04.518 }, 00:06:04.518 { 00:06:04.518 "subsystem": "accel", 00:06:04.518 "config": [ 00:06:04.518 { 00:06:04.518 "method": "accel_set_options", 00:06:04.518 "params": { 00:06:04.518 "small_cache_size": 128, 00:06:04.518 "large_cache_size": 16, 00:06:04.518 "task_count": 2048, 00:06:04.518 "sequence_count": 2048, 00:06:04.518 "buf_count": 2048 00:06:04.518 } 00:06:04.518 } 00:06:04.518 ] 00:06:04.518 }, 00:06:04.518 { 00:06:04.518 "subsystem": "bdev", 00:06:04.518 "config": [ 00:06:04.518 { 00:06:04.518 "method": "bdev_set_options", 00:06:04.518 "params": { 00:06:04.518 "bdev_io_pool_size": 65535, 00:06:04.518 "bdev_io_cache_size": 256, 00:06:04.518 "bdev_auto_examine": true, 00:06:04.518 "iobuf_small_cache_size": 128, 00:06:04.518 "iobuf_large_cache_size": 16 00:06:04.518 } 00:06:04.518 }, 00:06:04.518 { 00:06:04.518 "method": "bdev_raid_set_options", 00:06:04.518 "params": { 00:06:04.518 "process_window_size_kb": 1024 00:06:04.518 } 00:06:04.518 }, 00:06:04.518 { 00:06:04.518 "method": "bdev_iscsi_set_options", 00:06:04.518 "params": { 00:06:04.518 "timeout_sec": 30 00:06:04.518 } 00:06:04.518 }, 00:06:04.518 { 00:06:04.518 "method": "bdev_nvme_set_options", 00:06:04.518 "params": { 00:06:04.518 "action_on_timeout": "none", 00:06:04.518 "timeout_us": 0, 00:06:04.518 "timeout_admin_us": 0, 00:06:04.518 "keep_alive_timeout_ms": 10000, 00:06:04.518 "arbitration_burst": 0, 00:06:04.518 "low_priority_weight": 0, 00:06:04.518 "medium_priority_weight": 0, 00:06:04.518 "high_priority_weight": 0, 00:06:04.518 "nvme_adminq_poll_period_us": 10000, 00:06:04.518 "nvme_ioq_poll_period_us": 0, 00:06:04.518 "io_queue_requests": 0, 00:06:04.518 "delay_cmd_submit": true, 00:06:04.518 "transport_retry_count": 4, 00:06:04.518 "bdev_retry_count": 3, 00:06:04.518 "transport_ack_timeout": 0, 00:06:04.518 "ctrlr_loss_timeout_sec": 0, 00:06:04.518 "reconnect_delay_sec": 0, 00:06:04.518 "fast_io_fail_timeout_sec": 0, 00:06:04.518 "disable_auto_failback": false, 00:06:04.518 "generate_uuids": false, 00:06:04.518 "transport_tos": 0, 00:06:04.518 "nvme_error_stat": false, 00:06:04.518 "rdma_srq_size": 0, 00:06:04.519 "io_path_stat": false, 00:06:04.519 "allow_accel_sequence": false, 00:06:04.519 "rdma_max_cq_size": 0, 00:06:04.519 "rdma_cm_event_timeout_ms": 0, 00:06:04.519 "dhchap_digests": [ 00:06:04.519 "sha256", 00:06:04.519 "sha384", 00:06:04.519 "sha512" 00:06:04.519 ], 00:06:04.519 "dhchap_dhgroups": [ 00:06:04.519 "null", 00:06:04.519 "ffdhe2048", 00:06:04.519 "ffdhe3072", 00:06:04.519 "ffdhe4096", 00:06:04.519 
"ffdhe6144", 00:06:04.519 "ffdhe8192" 00:06:04.519 ] 00:06:04.519 } 00:06:04.519 }, 00:06:04.519 { 00:06:04.519 "method": "bdev_nvme_set_hotplug", 00:06:04.519 "params": { 00:06:04.519 "period_us": 100000, 00:06:04.519 "enable": false 00:06:04.519 } 00:06:04.519 }, 00:06:04.519 { 00:06:04.519 "method": "bdev_wait_for_examine" 00:06:04.519 } 00:06:04.519 ] 00:06:04.519 }, 00:06:04.519 { 00:06:04.519 "subsystem": "scsi", 00:06:04.519 "config": null 00:06:04.519 }, 00:06:04.519 { 00:06:04.519 "subsystem": "scheduler", 00:06:04.519 "config": [ 00:06:04.519 { 00:06:04.519 "method": "framework_set_scheduler", 00:06:04.519 "params": { 00:06:04.519 "name": "static" 00:06:04.519 } 00:06:04.519 } 00:06:04.519 ] 00:06:04.519 }, 00:06:04.519 { 00:06:04.519 "subsystem": "vhost_scsi", 00:06:04.519 "config": [] 00:06:04.519 }, 00:06:04.519 { 00:06:04.519 "subsystem": "vhost_blk", 00:06:04.519 "config": [] 00:06:04.519 }, 00:06:04.519 { 00:06:04.519 "subsystem": "ublk", 00:06:04.519 "config": [] 00:06:04.519 }, 00:06:04.519 { 00:06:04.519 "subsystem": "nbd", 00:06:04.519 "config": [] 00:06:04.519 }, 00:06:04.519 { 00:06:04.519 "subsystem": "nvmf", 00:06:04.519 "config": [ 00:06:04.519 { 00:06:04.519 "method": "nvmf_set_config", 00:06:04.519 "params": { 00:06:04.519 "discovery_filter": "match_any", 00:06:04.519 "admin_cmd_passthru": { 00:06:04.519 "identify_ctrlr": false 00:06:04.519 } 00:06:04.519 } 00:06:04.519 }, 00:06:04.519 { 00:06:04.519 "method": "nvmf_set_max_subsystems", 00:06:04.519 "params": { 00:06:04.519 "max_subsystems": 1024 00:06:04.519 } 00:06:04.519 }, 00:06:04.519 { 00:06:04.519 "method": "nvmf_set_crdt", 00:06:04.519 "params": { 00:06:04.519 "crdt1": 0, 00:06:04.519 "crdt2": 0, 00:06:04.519 "crdt3": 0 00:06:04.519 } 00:06:04.519 }, 00:06:04.519 { 00:06:04.519 "method": "nvmf_create_transport", 00:06:04.519 "params": { 00:06:04.519 "trtype": "TCP", 00:06:04.519 "max_queue_depth": 128, 00:06:04.519 "max_io_qpairs_per_ctrlr": 127, 00:06:04.519 "in_capsule_data_size": 4096, 00:06:04.519 "max_io_size": 131072, 00:06:04.519 "io_unit_size": 131072, 00:06:04.519 "max_aq_depth": 128, 00:06:04.519 "num_shared_buffers": 511, 00:06:04.519 "buf_cache_size": 4294967295, 00:06:04.519 "dif_insert_or_strip": false, 00:06:04.519 "zcopy": false, 00:06:04.519 "c2h_success": true, 00:06:04.519 "sock_priority": 0, 00:06:04.519 "abort_timeout_sec": 1, 00:06:04.519 "ack_timeout": 0, 00:06:04.519 "data_wr_pool_size": 0 00:06:04.519 } 00:06:04.519 } 00:06:04.519 ] 00:06:04.519 }, 00:06:04.519 { 00:06:04.519 "subsystem": "iscsi", 00:06:04.519 "config": [ 00:06:04.519 { 00:06:04.519 "method": "iscsi_set_options", 00:06:04.519 "params": { 00:06:04.519 "node_base": "iqn.2016-06.io.spdk", 00:06:04.519 "max_sessions": 128, 00:06:04.519 "max_connections_per_session": 2, 00:06:04.519 "max_queue_depth": 64, 00:06:04.519 "default_time2wait": 2, 00:06:04.519 "default_time2retain": 20, 00:06:04.519 "first_burst_length": 8192, 00:06:04.519 "immediate_data": true, 00:06:04.519 "allow_duplicated_isid": false, 00:06:04.519 "error_recovery_level": 0, 00:06:04.519 "nop_timeout": 60, 00:06:04.519 "nop_in_interval": 30, 00:06:04.519 "disable_chap": false, 00:06:04.519 "require_chap": false, 00:06:04.519 "mutual_chap": false, 00:06:04.519 "chap_group": 0, 00:06:04.519 "max_large_datain_per_connection": 64, 00:06:04.519 "max_r2t_per_connection": 4, 00:06:04.519 "pdu_pool_size": 36864, 00:06:04.519 "immediate_data_pool_size": 16384, 00:06:04.519 "data_out_pool_size": 2048 00:06:04.519 } 00:06:04.519 } 00:06:04.519 ] 00:06:04.519 } 
00:06:04.519 ] 00:06:04.519 } 00:06:04.519 00:42:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:04.519 00:42:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3928728 00:06:04.519 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3928728 ']' 00:06:04.519 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3928728 00:06:04.519 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:04.519 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:04.519 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3928728 00:06:04.519 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:04.519 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:04.519 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3928728' 00:06:04.519 killing process with pid 3928728 00:06:04.519 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3928728 00:06:04.519 00:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3928728 00:06:04.778 00:42:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3928835 00:06:04.778 00:42:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:04.778 00:42:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:10.038 00:42:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3928835 00:06:10.038 00:42:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3928835 ']' 00:06:10.038 00:42:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3928835 00:06:10.038 00:42:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:10.038 00:42:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:10.038 00:42:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3928835 00:06:10.038 00:42:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:10.038 00:42:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:10.038 00:42:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3928835' 00:06:10.038 killing process with pid 3928835 00:06:10.038 00:42:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3928835 00:06:10.038 00:42:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3928835 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:10.297 00:06:10.297 real 0m6.345s 00:06:10.297 user 0m6.045s 00:06:10.297 sys 0m0.620s 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 
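That closes the skip_rpc_with_json round trip logged above: create the TCP transport, snapshot the live state with save_config, relaunch a second target non-interactively from that snapshot, and grep its log for the TCP Transport Init notice to prove the transport was reconstructed. Condensed sketch with rpc.py standing in for rpc_cmd (paths mirror the test):

    CONFIG=$rootdir/test/rpc/config.json
    LOG=$rootdir/test/rpc/log.txt

    rpc.py nvmf_create_transport -t tcp    # target logs "*** TCP Transport Init ***"
    rpc.py save_config > "$CONFIG"         # dump the running configuration as JSON
    killprocess $spdk_pid                  # stop the live target first

    # Replay the snapshot without an RPC server and verify the transport comes back.
    spdk_tgt --no-rpc-server -m 0x1 --json "$CONFIG" &> "$LOG" &
    pid=$!; sleep 5; kill "$pid"; wait "$pid" || true
    grep -q 'TCP Transport Init' "$LOG"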
00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:10.297 ************************************ 00:06:10.297 END TEST skip_rpc_with_json 00:06:10.297 ************************************ 00:06:10.297 00:42:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:10.297 00:42:57 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.297 00:42:57 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.297 00:42:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.297 ************************************ 00:06:10.297 START TEST skip_rpc_with_delay 00:06:10.297 ************************************ 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.297 [2024-05-15 00:42:57.305490] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
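The error above is the entire point of skip_rpc_with_delay: --wait-for-rpc holds initialization until a framework_start_init RPC arrives, which can never happen when --no-rpc-server is also given, so spdk_app_start rejects the combination up front. The test therefore reduces to a single inverted assertion (NOT as sketched earlier):

    NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc   # must be refused at startup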
00:06:10.297 [2024-05-15 00:42:57.305640] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.297 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.298 00:06:10.298 real 0m0.079s 00:06:10.298 user 0m0.050s 00:06:10.298 sys 0m0.029s 00:06:10.298 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.298 00:42:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:10.298 ************************************ 00:06:10.298 END TEST skip_rpc_with_delay 00:06:10.298 ************************************ 00:06:10.298 00:42:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:10.298 00:42:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:10.298 00:42:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:10.556 00:42:57 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.556 00:42:57 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.556 00:42:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.556 ************************************ 00:06:10.556 START TEST exit_on_failed_rpc_init 00:06:10.556 ************************************ 00:06:10.556 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:06:10.556 00:42:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3929384 00:06:10.556 00:42:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.556 00:42:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3929384 00:06:10.556 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3929384 ']' 00:06:10.556 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.556 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.556 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.556 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.556 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:10.556 [2024-05-15 00:42:57.438175] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
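Every test in this suite arms the same cleanup before touching its target, visible in the trap lines throughout the log, so a failed assertion still reaps the spdk_tgt it spawned instead of leaking it. A condensed sketch of the pattern (killprocess here is a stand-in for the fuller helper in autotest_common.sh, which also verifies the process name before killing):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to reap
        kill "$pid"
        wait "$pid" || true                      # collect it; tolerate non-zero exit
    }

    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    # ... test body ...
    trap - SIGINT SIGTERM EXIT                   # disarm on the success path
    killprocess $spdk_pid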
00:06:10.556 [2024-05-15 00:42:57.438267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929384 ] 00:06:10.556 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.556 [2024-05-15 00:42:57.496532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.814 [2024-05-15 00:42:57.613251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.814 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.814 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:06:10.814 00:42:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.814 00:42:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:10.814 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:10.814 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:10.814 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.814 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.814 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.814 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.814 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.814 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.814 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.814 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:10.814 00:42:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:11.073 [2024-05-15 00:42:57.900703] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:06:11.073 [2024-05-15 00:42:57.900799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929406 ] 00:06:11.073 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.073 [2024-05-15 00:42:57.961061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.073 [2024-05-15 00:42:58.080772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.073 [2024-05-15 00:42:58.080898] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
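Both halves of exit_on_failed_rpc_init hinge on waitforlisten-style polling: the first target must actually own /var/tmp/spdk.sock before the second instance is launched against the same socket. A rough sketch of such a polling loop, assuming only rpc.py as shown in the trace (the real waitforlisten in autotest_common.sh does more bookkeeping):

waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
        # rpc_get_methods succeeds only once the RPC server is accepting.
        ./scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}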
00:06:11.073 [2024-05-15 00:42:58.080918] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:11.073 [2024-05-15 00:42:58.080938] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3929384 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3929384 ']' 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3929384 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3929384 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3929384' 00:06:11.331 killing process with pid 3929384 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3929384 00:06:11.331 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3929384 00:06:11.589 00:06:11.589 real 0m1.166s 00:06:11.589 user 0m1.410s 00:06:11.589 sys 0m0.412s 00:06:11.589 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.589 00:42:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:11.589 ************************************ 00:06:11.589 END TEST exit_on_failed_rpc_init 00:06:11.589 ************************************ 00:06:11.589 00:42:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:11.589 00:06:11.589 real 0m13.254s 00:06:11.589 user 0m12.679s 00:06:11.589 sys 0m1.535s 00:06:11.589 00:42:58 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.589 00:42:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.589 ************************************ 00:06:11.589 END TEST skip_rpc 00:06:11.589 ************************************ 00:06:11.589 00:42:58 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:11.589 00:42:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.589 00:42:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.589 00:42:58 -- 
common/autotest_common.sh@10 -- # set +x 00:06:11.589 ************************************ 00:06:11.589 START TEST rpc_client 00:06:11.589 ************************************ 00:06:11.589 00:42:58 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:11.848 * Looking for test storage... 00:06:11.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:11.848 00:42:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:11.848 OK 00:06:11.848 00:42:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:11.848 00:06:11.848 real 0m0.070s 00:06:11.848 user 0m0.030s 00:06:11.848 sys 0m0.045s 00:06:11.848 00:42:58 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.848 00:42:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:11.848 ************************************ 00:06:11.848 END TEST rpc_client 00:06:11.848 ************************************ 00:06:11.848 00:42:58 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:11.848 00:42:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.848 00:42:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.848 00:42:58 -- common/autotest_common.sh@10 -- # set +x 00:06:11.848 ************************************ 00:06:11.848 START TEST json_config 00:06:11.848 ************************************ 00:06:11.848 00:42:58 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:11.848 00:42:58 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.848 00:42:58 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.848 00:42:58 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.848 00:42:58 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.848 00:42:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.848 00:42:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.848 00:42:58 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.848 00:42:58 json_config -- paths/export.sh@5 -- # export PATH 00:06:11.848 00:42:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@47 -- # : 0 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:11.848 00:42:58 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:11.848 00:42:58 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:11.848 00:42:58 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:11.848 00:42:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:11.848 00:42:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:11.848 00:42:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:11.848 00:42:58 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:11.848 00:42:58 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:11.848 00:42:58 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:11.848 00:42:58 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:11.848 00:42:58 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:11.849 00:42:58 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:11.849 00:42:58 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:11.849 00:42:58 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:11.849 00:42:58 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:11.849 00:42:58 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:11.849 00:42:58 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:11.849 INFO: JSON configuration test init 00:06:11.849 00:42:58 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:11.849 00:42:58 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:11.849 00:42:58 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:11.849 00:42:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.849 00:42:58 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:11.849 00:42:58 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:11.849 00:42:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.849 00:42:58 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:11.849 00:42:58 json_config -- json_config/common.sh@9 -- # local app=target 00:06:11.849 00:42:58 json_config -- json_config/common.sh@10 -- # shift 00:06:11.849 00:42:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:11.849 00:42:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:11.849 00:42:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:11.849 00:42:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.849 00:42:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.849 00:42:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3929616 00:06:11.849 00:42:58 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:11.849 00:42:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:11.849 Waiting for target to run... 
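json_config/common.sh drives its apps from the associative arrays traced above (app_pid, app_socket, app_params, configs_path), so 'target' and 'initiator' share one start routine keyed by name. A condensed, illustrative sketch of that table-driven launch; the single spdk_tgt binary and shortened paths here are simplifications, not the exact script:

declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)
declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')
declare -A app_pid

json_config_test_start_app_sketch() {
    local app=$1; shift
    # Unquoted expansion is intentional: app_params holds several flags.
    ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" "$@" &
    app_pid[$app]=$!
    echo "Waiting for $app to run..."
    waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"
}

# e.g. start the target with no subsystem configuration loaded yet:
json_config_test_start_app_sketch target --wait-for-rpc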
00:06:11.849 00:42:58 json_config -- json_config/common.sh@25 -- # waitforlisten 3929616 /var/tmp/spdk_tgt.sock 00:06:11.849 00:42:58 json_config -- common/autotest_common.sh@827 -- # '[' -z 3929616 ']' 00:06:11.849 00:42:58 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:11.849 00:42:58 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:11.849 00:42:58 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:11.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:11.849 00:42:58 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:11.849 00:42:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.849 [2024-05-15 00:42:58.900248] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:06:11.849 [2024-05-15 00:42:58.900348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929616 ] 00:06:12.108 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.367 [2024-05-15 00:42:59.263343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.367 [2024-05-15 00:42:59.360528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.934 00:42:59 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:12.934 00:42:59 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:12.934 00:42:59 json_config -- json_config/common.sh@26 -- # echo '' 00:06:12.934 00:06:12.934 00:42:59 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:12.934 00:42:59 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:12.934 00:42:59 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:12.934 00:42:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.934 00:42:59 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:12.934 00:42:59 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:12.934 00:42:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.934 00:42:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.934 00:42:59 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:12.934 00:42:59 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:12.934 00:42:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:16.224 00:43:03 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:16.224 00:43:03 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:16.224 00:43:03 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:16.224 00:43:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.224 00:43:03 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:16.224 00:43:03 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:16.224 00:43:03 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:06:16.224 00:43:03 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:16.224 00:43:03 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:16.224 00:43:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:16.483 00:43:03 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:16.483 00:43:03 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:16.483 00:43:03 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:16.483 00:43:03 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:16.483 00:43:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.483 00:43:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.483 00:43:03 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:16.483 00:43:03 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:16.483 00:43:03 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:16.483 00:43:03 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:16.483 00:43:03 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:16.483 00:43:03 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:16.483 00:43:03 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:16.483 00:43:03 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:16.483 00:43:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.483 00:43:03 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:16.483 00:43:03 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:16.483 00:43:03 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:16.483 00:43:03 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:16.483 00:43:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:16.743 MallocForNvmf0 00:06:16.743 00:43:03 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:16.743 00:43:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:17.000 MallocForNvmf1 00:06:17.258 00:43:04 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:17.258 00:43:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:17.517 [2024-05-15 00:43:04.341267] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.517 00:43:04 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:17.517 00:43:04 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:17.775 00:43:04 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:17.775 00:43:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:18.033 00:43:04 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:18.033 00:43:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:18.290 00:43:05 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:18.291 00:43:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:18.548 [2024-05-15 00:43:05.516587] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:18.548 [2024-05-15 00:43:05.517049] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:18.548 00:43:05 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:18.548 00:43:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.548 00:43:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.548 00:43:05 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:18.548 00:43:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.548 00:43:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.548 00:43:05 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:18.548 00:43:05 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:18.548 00:43:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:18.805 MallocBdevForConfigChangeCheck 00:06:19.064 00:43:05 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:19.064 00:43:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:19.064 00:43:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.064 00:43:05 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:19.064 00:43:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.322 00:43:06 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:19.322 INFO: shutting down applications... 
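The shutdown that follows is a SIGINT-then-poll loop: the signal is sent once, then kill -0 is re-checked for up to thirty half-second intervals before the target is declared gone. Sketched in isolation, matching the i<30 / sleep 0.5 cadence visible in the trace:

json_config_test_shutdown_app_sketch() {
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then    # probe without signalling
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo "ERROR: pid $pid did not exit on SIGINT" >&2
    return 1
}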
00:06:19.322 00:43:06 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:19.322 00:43:06 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:19.322 00:43:06 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:19.322 00:43:06 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:21.220 Calling clear_iscsi_subsystem 00:06:21.220 Calling clear_nvmf_subsystem 00:06:21.220 Calling clear_nbd_subsystem 00:06:21.220 Calling clear_ublk_subsystem 00:06:21.220 Calling clear_vhost_blk_subsystem 00:06:21.220 Calling clear_vhost_scsi_subsystem 00:06:21.220 Calling clear_bdev_subsystem 00:06:21.221 00:43:07 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:21.221 00:43:07 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:21.221 00:43:07 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:21.221 00:43:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.221 00:43:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:21.221 00:43:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:21.478 00:43:08 json_config -- json_config/json_config.sh@345 -- # break 00:06:21.478 00:43:08 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:21.478 00:43:08 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:21.478 00:43:08 json_config -- json_config/common.sh@31 -- # local app=target 00:06:21.478 00:43:08 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:21.478 00:43:08 json_config -- json_config/common.sh@35 -- # [[ -n 3929616 ]] 00:06:21.478 00:43:08 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3929616 00:06:21.478 [2024-05-15 00:43:08.406802] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:21.478 00:43:08 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:21.478 00:43:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.478 00:43:08 json_config -- json_config/common.sh@41 -- # kill -0 3929616 00:06:21.478 00:43:08 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:22.043 00:43:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:22.043 00:43:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.043 00:43:08 json_config -- json_config/common.sh@41 -- # kill -0 3929616 00:06:22.043 00:43:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:22.043 00:43:08 json_config -- json_config/common.sh@43 -- # break 00:06:22.043 00:43:08 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:22.043 00:43:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:22.043 SPDK target shutdown done 00:06:22.043 00:43:08 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching 
applications...' 00:06:22.043 INFO: relaunching applications... 00:06:22.043 00:43:08 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.043 00:43:08 json_config -- json_config/common.sh@9 -- # local app=target 00:06:22.043 00:43:08 json_config -- json_config/common.sh@10 -- # shift 00:06:22.043 00:43:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:22.043 00:43:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:22.043 00:43:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:22.043 00:43:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.043 00:43:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.043 00:43:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3930643 00:06:22.043 00:43:08 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.043 00:43:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:22.043 Waiting for target to run... 00:06:22.043 00:43:08 json_config -- json_config/common.sh@25 -- # waitforlisten 3930643 /var/tmp/spdk_tgt.sock 00:06:22.043 00:43:08 json_config -- common/autotest_common.sh@827 -- # '[' -z 3930643 ']' 00:06:22.043 00:43:08 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:22.043 00:43:08 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:22.043 00:43:08 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:22.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:22.043 00:43:08 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:22.043 00:43:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.043 [2024-05-15 00:43:08.969800] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:06:22.043 [2024-05-15 00:43:08.969895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930643 ] 00:06:22.043 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.300 [2024-05-15 00:43:09.277400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.560 [2024-05-15 00:43:09.372648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.853 [2024-05-15 00:43:12.382914] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.853 [2024-05-15 00:43:12.414900] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:25.853 [2024-05-15 00:43:12.415310] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:25.853 00:43:12 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.853 00:43:12 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:25.853 00:43:12 json_config -- json_config/common.sh@26 -- # echo '' 00:06:25.853 00:06:25.853 00:43:12 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:25.853 00:43:12 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:25.853 INFO: Checking if target configuration is the same... 00:06:25.853 00:43:12 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:25.853 00:43:12 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:25.853 00:43:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:25.853 + '[' 2 -ne 2 ']' 00:06:25.853 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:25.854 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:25.854 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:25.854 +++ basename /dev/fd/62 00:06:25.854 ++ mktemp /tmp/62.XXX 00:06:25.854 + tmp_file_1=/tmp/62.fYF 00:06:25.854 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:25.854 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:25.854 + tmp_file_2=/tmp/spdk_tgt_config.json.uOP 00:06:25.854 + ret=0 00:06:25.854 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:25.854 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:26.110 + diff -u /tmp/62.fYF /tmp/spdk_tgt_config.json.uOP 00:06:26.110 + echo 'INFO: JSON config files are the same' 00:06:26.110 INFO: JSON config files are the same 00:06:26.110 + rm /tmp/62.fYF /tmp/spdk_tgt_config.json.uOP 00:06:26.110 + exit 0 00:06:26.110 00:43:12 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:26.110 00:43:12 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:26.110 INFO: changing configuration and checking if this can be detected... 
00:06:26.110 00:43:12 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:26.110 00:43:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:26.366 00:43:13 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.366 00:43:13 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:26.366 00:43:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:26.366 + '[' 2 -ne 2 ']' 00:06:26.366 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:26.366 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:26.366 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:26.366 +++ basename /dev/fd/62 00:06:26.366 ++ mktemp /tmp/62.XXX 00:06:26.366 + tmp_file_1=/tmp/62.9HE 00:06:26.366 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.366 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:26.366 + tmp_file_2=/tmp/spdk_tgt_config.json.jQt 00:06:26.366 + ret=0 00:06:26.366 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:26.622 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:26.878 + diff -u /tmp/62.9HE /tmp/spdk_tgt_config.json.jQt 00:06:26.878 + ret=1 00:06:26.878 + echo '=== Start of file: /tmp/62.9HE ===' 00:06:26.878 + cat /tmp/62.9HE 00:06:26.878 + echo '=== End of file: /tmp/62.9HE ===' 00:06:26.878 + echo '' 00:06:26.878 + echo '=== Start of file: /tmp/spdk_tgt_config.json.jQt ===' 00:06:26.878 + cat /tmp/spdk_tgt_config.json.jQt 00:06:26.878 + echo '=== End of file: /tmp/spdk_tgt_config.json.jQt ===' 00:06:26.878 + echo '' 00:06:26.878 + rm /tmp/62.9HE /tmp/spdk_tgt_config.json.jQt 00:06:26.878 + exit 1 00:06:26.878 00:43:13 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:26.878 INFO: configuration change detected. 
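The same-or-changed check above normalizes both JSON dumps (the live save_config output and the on-disk spdk_tgt_config.json) through config_filter.py -method sort before diffing, so key ordering alone can never register as a configuration change. The core of that comparison, roughly, with repository-relative paths for brevity:

tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method sort > "$tmp_file_1"
./test/json_config/config_filter.py -method sort \
    < spdk_tgt_config.json > "$tmp_file_2"
if diff -u "$tmp_file_1" "$tmp_file_2"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$tmp_file_1" "$tmp_file_2"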
00:06:26.878 00:43:13 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:26.878 00:43:13 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:26.878 00:43:13 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:26.878 00:43:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.878 00:43:13 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:26.878 00:43:13 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:26.878 00:43:13 json_config -- json_config/json_config.sh@317 -- # [[ -n 3930643 ]] 00:06:26.878 00:43:13 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:26.878 00:43:13 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:26.878 00:43:13 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:26.878 00:43:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.878 00:43:13 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:26.878 00:43:13 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:26.878 00:43:13 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:26.878 00:43:13 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:26.878 00:43:13 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:26.878 00:43:13 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:26.878 00:43:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.878 00:43:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.878 00:43:13 json_config -- json_config/json_config.sh@323 -- # killprocess 3930643 00:06:26.878 00:43:13 json_config -- common/autotest_common.sh@946 -- # '[' -z 3930643 ']' 00:06:26.878 00:43:13 json_config -- common/autotest_common.sh@950 -- # kill -0 3930643 00:06:26.878 00:43:13 json_config -- common/autotest_common.sh@951 -- # uname 00:06:26.878 00:43:13 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:26.878 00:43:13 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3930643 00:06:26.878 00:43:13 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:26.878 00:43:13 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:26.878 00:43:13 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3930643' 00:06:26.878 killing process with pid 3930643 00:06:26.878 00:43:13 json_config -- common/autotest_common.sh@965 -- # kill 3930643 00:06:26.878 [2024-05-15 00:43:13.764501] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:26.878 00:43:13 json_config -- common/autotest_common.sh@970 -- # wait 3930643 00:06:28.777 00:43:15 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:28.777 00:43:15 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:28.777 00:43:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:28.777 00:43:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.777 00:43:15 
json_config -- json_config/json_config.sh@328 -- # return 0 00:06:28.777 00:43:15 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:28.777 INFO: Success 00:06:28.777 00:06:28.777 real 0m16.588s 00:06:28.777 user 0m19.383s 00:06:28.777 sys 0m1.933s 00:06:28.777 00:43:15 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.777 00:43:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.777 ************************************ 00:06:28.777 END TEST json_config 00:06:28.777 ************************************ 00:06:28.777 00:43:15 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:28.777 00:43:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:28.777 00:43:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.777 00:43:15 -- common/autotest_common.sh@10 -- # set +x 00:06:28.777 ************************************ 00:06:28.777 START TEST json_config_extra_key 00:06:28.777 ************************************ 00:06:28.777 00:43:15 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:28.777 00:43:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:28.777 00:43:15 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.777 00:43:15 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.777 
00:43:15 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.777 00:43:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.777 00:43:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.777 00:43:15 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.777 00:43:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:28.777 00:43:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:28.777 00:43:15 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:28.777 00:43:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:28.777 00:43:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:28.777 00:43:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:28.777 00:43:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:28.777 00:43:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:28.777 00:43:15 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:28.777 00:43:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:28.777 00:43:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:28.777 00:43:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:28.777 00:43:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:28.777 00:43:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:28.777 INFO: launching applications... 00:06:28.777 00:43:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:28.777 00:43:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:28.777 00:43:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:28.777 00:43:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:28.777 00:43:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:28.777 00:43:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:28.777 00:43:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:28.777 00:43:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:28.777 00:43:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3931360 00:06:28.777 00:43:15 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:28.777 00:43:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:28.777 Waiting for target to run... 00:06:28.777 00:43:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3931360 /var/tmp/spdk_tgt.sock 00:06:28.777 00:43:15 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3931360 ']' 00:06:28.777 00:43:15 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:28.777 00:43:15 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.777 00:43:15 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:28.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:28.777 00:43:15 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.777 00:43:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:28.777 [2024-05-15 00:43:15.520611] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:06:28.777 [2024-05-15 00:43:15.520719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931360 ] 00:06:28.777 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.035 [2024-05-15 00:43:15.888666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.035 [2024-05-15 00:43:15.985620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.600 00:43:16 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.600 00:43:16 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:29.600 00:43:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:29.600 00:06:29.600 00:43:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:29.600 INFO: shutting down applications... 00:06:29.600 00:43:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:29.600 00:43:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:29.600 00:43:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:29.600 00:43:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3931360 ]] 00:06:29.600 00:43:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3931360 00:06:29.600 00:43:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:29.600 00:43:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.600 00:43:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3931360 00:06:29.600 00:43:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:30.167 00:43:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:30.167 00:43:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.167 00:43:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3931360 00:06:30.167 00:43:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:30.167 00:43:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:30.167 00:43:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:30.167 00:43:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:30.167 SPDK target shutdown done 00:06:30.167 00:43:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:30.167 Success 00:06:30.167 00:06:30.167 real 0m1.644s 00:06:30.167 user 0m1.568s 00:06:30.167 sys 0m0.450s 00:06:30.167 00:43:17 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.167 00:43:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:30.167 ************************************ 00:06:30.167 END TEST json_config_extra_key 00:06:30.167 ************************************ 00:06:30.167 00:43:17 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:30.167 00:43:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.167 00:43:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.167 00:43:17 -- common/autotest_common.sh@10 -- # set +x 00:06:30.167 ************************************ 
00:06:30.167 ************************************
00:06:30.167 START TEST alias_rpc
00:06:30.167 ************************************
00:06:30.167 00:43:17 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:30.167 * Looking for test storage...
00:06:30.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:06:30.167 00:43:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:30.167 00:43:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3931603
00:06:30.167 00:43:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3931603
00:06:30.167 00:43:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:30.167 00:43:17 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3931603 ']'
00:06:30.167 00:43:17 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:30.167 00:43:17 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:30.167 00:43:17 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:30.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:30.167 00:43:17 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:30.167 00:43:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:30.167 [2024-05-15 00:43:17.220407] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization...
00:06:30.167 [2024-05-15 00:43:17.220521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931603 ]
00:06:30.425 EAL: No free 2048 kB hugepages reported on node 1
00:06:30.425 [2024-05-15 00:43:17.282004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:30.425 [2024-05-15 00:43:17.398859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:30.682 00:43:17 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:30.682 00:43:17 alias_rpc -- common/autotest_common.sh@860 -- # return 0
00:06:30.682 00:43:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:06:30.941 00:43:17 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3931603
00:06:30.941 00:43:17 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3931603 ']'
00:06:30.941 00:43:17 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3931603
00:06:30.941 00:43:17 alias_rpc -- common/autotest_common.sh@951 -- # uname
00:06:30.941 00:43:17 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:30.941 00:43:17 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3931603
00:06:30.941 00:43:17 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:30.941 00:43:17 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:30.941 00:43:17 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3931603'
00:06:30.941 killing process with pid 3931603
00:06:30.941 00:43:17 alias_rpc -- common/autotest_common.sh@965 -- # kill 3931603
00:06:30.941 00:43:17 alias_rpc -- common/autotest_common.sh@970 -- # wait 3931603
00:06:31.508
00:06:31.508 real 0m1.192s
00:06:31.508 user 0m1.387s
00:06:31.508 sys 0m0.393s
00:06:31.508 00:43:18 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:31.508 00:43:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:31.508 ************************************
00:06:31.508 END TEST alias_rpc
00:06:31.508 ************************************
00:06:31.508 00:43:18 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]]
00:06:31.508 00:43:18 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:31.508 00:43:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:31.508 00:43:18 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:31.508 00:43:18 -- common/autotest_common.sh@10 -- # set +x
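Both this suite and the previous one finish through the killprocess helper; its traced steps (autotest_common.sh@946-@970) amount to the sketch below. The sudo branch is an assumption here, because every run in this log takes the plain-kill path with process_name=reactor_0:

  killprocess() {
    [ -z "$1" ] && return 1                    # '[' -z <pid> ']'
    kill -0 "$1" || return 1                   # the process must still be alive
    if [ "$(uname)" = Linux ]; then
      process_name=$(ps --no-headers -o comm= "$1")
    fi
    if [ "$process_name" = sudo ]; then
      kill "$(pgrep -P "$1")"                  # assumed: kill the wrapped child instead
    else
      echo "killing process with pid $1"
      kill "$1"
    fi
    wait "$1"                                  # reap it so the exit code is observed
  }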
00:06:31.508 ************************************
00:06:31.508 START TEST spdkcli_tcp
00:06:31.508 ************************************
00:06:31.508 00:43:18 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:31.508 * Looking for test storage...
00:06:31.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:06:31.508 00:43:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:06:31.508 00:43:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:06:31.508 00:43:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:06:31.508 00:43:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:06:31.508 00:43:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:06:31.508 00:43:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:06:31.508 00:43:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:06:31.508 00:43:18 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable
00:06:31.508 00:43:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:31.508 00:43:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3931758
00:06:31.508 00:43:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:06:31.508 00:43:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3931758
00:06:31.508 00:43:18 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3931758 ']'
00:06:31.508 00:43:18 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:31.508 00:43:18 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:31.508 00:43:18 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:31.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:31.508 00:43:18 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:31.508 00:43:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:31.508 [2024-05-15 00:43:18.469412] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization...
00:06:31.508 [2024-05-15 00:43:18.469508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931758 ]
00:06:31.508 EAL: No free 2048 kB hugepages reported on node 1
00:06:31.508 [2024-05-15 00:43:18.528982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:31.767 [2024-05-15 00:43:18.646425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:31.767 [2024-05-15 00:43:18.646457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:32.024 00:43:18 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:32.024 00:43:18 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0
00:06:32.024 00:43:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3931773
00:06:32.024 00:43:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:06:32.024 00:43:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:06:32.282 [
00:06:32.282 "bdev_malloc_delete",
00:06:32.282 "bdev_malloc_create",
00:06:32.282 "bdev_null_resize",
00:06:32.282 "bdev_null_delete",
00:06:32.282 "bdev_null_create",
00:06:32.282 "bdev_nvme_cuse_unregister",
00:06:32.282 "bdev_nvme_cuse_register",
00:06:32.282 "bdev_opal_new_user",
00:06:32.282 "bdev_opal_set_lock_state",
00:06:32.282 "bdev_opal_delete",
00:06:32.282 "bdev_opal_get_info",
00:06:32.282 "bdev_opal_create",
00:06:32.282 "bdev_nvme_opal_revert",
00:06:32.282 "bdev_nvme_opal_init",
00:06:32.282 "bdev_nvme_send_cmd",
00:06:32.282 "bdev_nvme_get_path_iostat",
00:06:32.282 "bdev_nvme_get_mdns_discovery_info",
00:06:32.282 "bdev_nvme_stop_mdns_discovery",
00:06:32.282 "bdev_nvme_start_mdns_discovery",
00:06:32.282 "bdev_nvme_set_multipath_policy",
00:06:32.282 "bdev_nvme_set_preferred_path",
00:06:32.282 "bdev_nvme_get_io_paths",
00:06:32.282 "bdev_nvme_remove_error_injection",
00:06:32.282 "bdev_nvme_add_error_injection",
00:06:32.282 "bdev_nvme_get_discovery_info",
00:06:32.282 "bdev_nvme_stop_discovery",
00:06:32.282 "bdev_nvme_start_discovery",
00:06:32.282 "bdev_nvme_get_controller_health_info",
00:06:32.282 "bdev_nvme_disable_controller",
00:06:32.282 "bdev_nvme_enable_controller",
00:06:32.282 "bdev_nvme_reset_controller",
00:06:32.282 "bdev_nvme_get_transport_statistics",
00:06:32.282 "bdev_nvme_apply_firmware",
00:06:32.282 "bdev_nvme_detach_controller",
00:06:32.282 "bdev_nvme_get_controllers",
00:06:32.282 "bdev_nvme_attach_controller",
00:06:32.282 "bdev_nvme_set_hotplug",
00:06:32.282 "bdev_nvme_set_options",
00:06:32.282 "bdev_passthru_delete",
00:06:32.282 "bdev_passthru_create",
00:06:32.282 "bdev_lvol_check_shallow_copy",
00:06:32.282 "bdev_lvol_start_shallow_copy",
00:06:32.282 "bdev_lvol_grow_lvstore",
00:06:32.282 "bdev_lvol_get_lvols",
00:06:32.282 "bdev_lvol_get_lvstores",
00:06:32.282 "bdev_lvol_delete",
00:06:32.282 "bdev_lvol_set_read_only",
00:06:32.282 "bdev_lvol_resize",
00:06:32.282 "bdev_lvol_decouple_parent",
00:06:32.282 "bdev_lvol_inflate",
00:06:32.282 "bdev_lvol_rename",
00:06:32.282 "bdev_lvol_clone_bdev",
00:06:32.282 "bdev_lvol_clone",
00:06:32.282 "bdev_lvol_snapshot",
00:06:32.282 "bdev_lvol_create",
00:06:32.282 "bdev_lvol_delete_lvstore",
00:06:32.282 "bdev_lvol_rename_lvstore",
00:06:32.282 "bdev_lvol_create_lvstore",
00:06:32.282 "bdev_raid_set_options",
00:06:32.282 "bdev_raid_remove_base_bdev",
00:06:32.282 "bdev_raid_add_base_bdev",
00:06:32.282 "bdev_raid_delete",
00:06:32.282 "bdev_raid_create",
00:06:32.282 "bdev_raid_get_bdevs",
00:06:32.282 "bdev_error_inject_error",
00:06:32.282 "bdev_error_delete",
00:06:32.282 "bdev_error_create",
00:06:32.282 "bdev_split_delete",
00:06:32.282 "bdev_split_create",
00:06:32.282 "bdev_delay_delete",
00:06:32.282 "bdev_delay_create",
00:06:32.282 "bdev_delay_update_latency",
00:06:32.282 "bdev_zone_block_delete",
00:06:32.282 "bdev_zone_block_create",
00:06:32.282 "blobfs_create",
00:06:32.282 "blobfs_detect",
00:06:32.282 "blobfs_set_cache_size",
00:06:32.282 "bdev_aio_delete",
00:06:32.282 "bdev_aio_rescan",
00:06:32.282 "bdev_aio_create",
00:06:32.282 "bdev_ftl_set_property",
00:06:32.282 "bdev_ftl_get_properties",
00:06:32.282 "bdev_ftl_get_stats",
00:06:32.282 "bdev_ftl_unmap",
00:06:32.282 "bdev_ftl_unload",
00:06:32.282 "bdev_ftl_delete",
00:06:32.282 "bdev_ftl_load",
00:06:32.282 "bdev_ftl_create",
00:06:32.282 "bdev_virtio_attach_controller",
00:06:32.282 "bdev_virtio_scsi_get_devices",
00:06:32.282 "bdev_virtio_detach_controller",
00:06:32.282 "bdev_virtio_blk_set_hotplug",
00:06:32.282 "bdev_iscsi_delete",
00:06:32.282 "bdev_iscsi_create",
00:06:32.282 "bdev_iscsi_set_options",
00:06:32.282 "accel_error_inject_error",
00:06:32.283 "ioat_scan_accel_module",
00:06:32.283 "dsa_scan_accel_module",
00:06:32.283 "iaa_scan_accel_module",
00:06:32.283 "vfu_virtio_create_scsi_endpoint",
00:06:32.283 "vfu_virtio_scsi_remove_target",
00:06:32.283 "vfu_virtio_scsi_add_target",
00:06:32.283 "vfu_virtio_create_blk_endpoint",
00:06:32.283 "vfu_virtio_delete_endpoint",
00:06:32.283 "keyring_file_remove_key",
00:06:32.283 "keyring_file_add_key",
00:06:32.283 "iscsi_get_histogram",
00:06:32.283 "iscsi_enable_histogram",
00:06:32.283 "iscsi_set_options",
00:06:32.283 "iscsi_get_auth_groups",
00:06:32.283 "iscsi_auth_group_remove_secret",
00:06:32.283 "iscsi_auth_group_add_secret",
00:06:32.283 "iscsi_delete_auth_group",
00:06:32.283 "iscsi_create_auth_group",
00:06:32.283 "iscsi_set_discovery_auth",
00:06:32.283 "iscsi_get_options",
00:06:32.283 "iscsi_target_node_request_logout",
00:06:32.283 "iscsi_target_node_set_redirect",
00:06:32.283 "iscsi_target_node_set_auth",
00:06:32.283 "iscsi_target_node_add_lun",
00:06:32.283 "iscsi_get_stats",
00:06:32.283 "iscsi_get_connections",
00:06:32.283 "iscsi_portal_group_set_auth",
00:06:32.283 "iscsi_start_portal_group",
00:06:32.283 "iscsi_delete_portal_group",
00:06:32.283 "iscsi_create_portal_group",
00:06:32.283 "iscsi_get_portal_groups",
00:06:32.283 "iscsi_delete_target_node",
00:06:32.283 "iscsi_target_node_remove_pg_ig_maps",
00:06:32.283 "iscsi_target_node_add_pg_ig_maps",
00:06:32.283 "iscsi_create_target_node",
00:06:32.283 "iscsi_get_target_nodes",
00:06:32.283 "iscsi_delete_initiator_group",
00:06:32.283 "iscsi_initiator_group_remove_initiators",
00:06:32.283 "iscsi_initiator_group_add_initiators",
00:06:32.283 "iscsi_create_initiator_group",
00:06:32.283 "iscsi_get_initiator_groups",
00:06:32.283 "nvmf_set_crdt",
00:06:32.283 "nvmf_set_config",
00:06:32.283 "nvmf_set_max_subsystems",
00:06:32.283 "nvmf_subsystem_get_listeners",
00:06:32.283 "nvmf_subsystem_get_qpairs",
00:06:32.283 "nvmf_subsystem_get_controllers",
00:06:32.283 "nvmf_get_stats",
00:06:32.283 "nvmf_get_transports",
00:06:32.283 "nvmf_create_transport",
00:06:32.283 "nvmf_get_targets",
00:06:32.283 "nvmf_delete_target",
00:06:32.283 "nvmf_create_target",
00:06:32.283 "nvmf_subsystem_allow_any_host",
00:06:32.283 "nvmf_subsystem_remove_host",
00:06:32.283 "nvmf_subsystem_add_host",
00:06:32.283 "nvmf_ns_remove_host",
00:06:32.283 "nvmf_ns_add_host",
00:06:32.283 "nvmf_subsystem_remove_ns",
00:06:32.283 "nvmf_subsystem_add_ns",
00:06:32.283 "nvmf_subsystem_listener_set_ana_state",
00:06:32.283 "nvmf_discovery_get_referrals",
00:06:32.283 "nvmf_discovery_remove_referral",
00:06:32.283 "nvmf_discovery_add_referral",
00:06:32.283 "nvmf_subsystem_remove_listener",
00:06:32.283 "nvmf_subsystem_add_listener",
00:06:32.283 "nvmf_delete_subsystem",
00:06:32.283 "nvmf_create_subsystem",
00:06:32.283 "nvmf_get_subsystems",
00:06:32.283 "env_dpdk_get_mem_stats",
00:06:32.283 "nbd_get_disks",
00:06:32.283 "nbd_stop_disk",
00:06:32.283 "nbd_start_disk",
00:06:32.283 "ublk_recover_disk",
00:06:32.283 "ublk_get_disks",
00:06:32.283 "ublk_stop_disk",
00:06:32.283 "ublk_start_disk",
00:06:32.283 "ublk_destroy_target",
00:06:32.283 "ublk_create_target",
00:06:32.283 "virtio_blk_create_transport",
00:06:32.283 "virtio_blk_get_transports",
00:06:32.283 "vhost_controller_set_coalescing",
00:06:32.283 "vhost_get_controllers",
00:06:32.283 "vhost_delete_controller",
00:06:32.283 "vhost_create_blk_controller",
00:06:32.283 "vhost_scsi_controller_remove_target",
00:06:32.283 "vhost_scsi_controller_add_target",
00:06:32.283 "vhost_start_scsi_controller",
00:06:32.283 "vhost_create_scsi_controller",
00:06:32.283 "thread_set_cpumask",
00:06:32.283 "framework_get_scheduler",
00:06:32.283 "framework_set_scheduler",
00:06:32.283 "framework_get_reactors",
00:06:32.283 "thread_get_io_channels",
00:06:32.283 "thread_get_pollers",
00:06:32.283 "thread_get_stats",
00:06:32.283 "framework_monitor_context_switch",
00:06:32.283 "spdk_kill_instance",
00:06:32.283 "log_enable_timestamps",
00:06:32.283 "log_get_flags",
00:06:32.283 "log_clear_flag",
00:06:32.283 "log_set_flag",
00:06:32.283 "log_get_level",
00:06:32.283 "log_set_level",
00:06:32.283 "log_get_print_level",
00:06:32.283 "log_set_print_level",
00:06:32.283 "framework_enable_cpumask_locks",
00:06:32.283 "framework_disable_cpumask_locks",
00:06:32.283 "framework_wait_init",
00:06:32.283 "framework_start_init",
00:06:32.283 "scsi_get_devices",
00:06:32.283 "bdev_get_histogram",
00:06:32.283 "bdev_enable_histogram",
00:06:32.283 "bdev_set_qos_limit",
00:06:32.283 "bdev_set_qd_sampling_period",
00:06:32.283 "bdev_get_bdevs",
00:06:32.283 "bdev_reset_iostat",
00:06:32.283 "bdev_get_iostat",
00:06:32.283 "bdev_examine",
00:06:32.283 "bdev_wait_for_examine",
00:06:32.283 "bdev_set_options",
00:06:32.283 "notify_get_notifications",
00:06:32.283 "notify_get_types",
00:06:32.283 "accel_get_stats",
00:06:32.283 "accel_set_options",
00:06:32.283 "accel_set_driver",
00:06:32.283 "accel_crypto_key_destroy",
00:06:32.283 "accel_crypto_keys_get",
00:06:32.283 "accel_crypto_key_create",
00:06:32.283 "accel_assign_opc",
00:06:32.283 "accel_get_module_info",
00:06:32.283 "accel_get_opc_assignments",
00:06:32.283 "vmd_rescan",
00:06:32.283 "vmd_remove_device",
00:06:32.283 "vmd_enable",
00:06:32.283 "sock_get_default_impl",
00:06:32.283 "sock_set_default_impl",
00:06:32.283 "sock_impl_set_options",
00:06:32.283 "sock_impl_get_options",
00:06:32.283 "iobuf_get_stats",
00:06:32.283 "iobuf_set_options",
00:06:32.283 "keyring_get_keys",
00:06:32.283 "framework_get_pci_devices",
00:06:32.283 "framework_get_config",
00:06:32.283 "framework_get_subsystems",
00:06:32.283 "vfu_tgt_set_base_path",
00:06:32.283 "trace_get_info",
00:06:32.283 "trace_get_tpoint_group_mask",
00:06:32.283 "trace_disable_tpoint_group",
00:06:32.283 "trace_enable_tpoint_group",
00:06:32.283 "trace_clear_tpoint_mask",
00:06:32.283 "trace_set_tpoint_mask",
00:06:32.283 "spdk_get_version",
00:06:32.283 "rpc_get_methods"
00:06:32.283 ]
00:06:32.283 00:43:19 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:06:32.283 00:43:19 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:32.283 00:43:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:32.283 00:43:19 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:06:32.283 00:43:19 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3931758
00:06:32.283 00:43:19 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3931758 ']'
00:06:32.283 00:43:19 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3931758
00:06:32.283 00:43:19 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname
00:06:32.283 00:43:19 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:32.283 00:43:19 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3931758
00:06:32.283 00:43:19 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:32.283 00:43:19 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:32.283 00:43:19 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3931758'
00:06:32.283 killing process with pid 3931758
00:06:32.283 00:43:19 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3931758
00:06:32.283 00:43:19 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3931758
00:06:32.543
00:06:32.543 real 0m1.194s
00:06:32.543 user 0m2.146s
00:06:32.543 sys 0m0.425s
00:06:32.543 00:43:19 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:32.543 00:43:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:32.543 ************************************
00:06:32.543 END TEST spdkcli_tcp
00:06:32.543 ************************************
00:06:32.543 00:43:19 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:32.543 00:43:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:32.543 00:43:19 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:32.543 00:43:19 -- common/autotest_common.sh@10 -- # set +x
00:06:32.822 ************************************
00:06:32.822 START TEST dpdk_mem_utility
00:06:32.822 ************************************
00:06:32.822 00:43:19 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:32.822 * Looking for test storage...
00:06:32.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility
00:06:32.822 00:43:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:32.822 00:43:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3931928
00:06:32.822 00:43:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3931928
00:06:32.822 00:43:19 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3931928 ']'
00:06:32.822 00:43:19 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:32.822 00:43:19 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:32.822 00:43:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:32.822 00:43:19 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:32.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:32.822 00:43:19 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:32.822 00:43:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:32.822 [2024-05-15 00:43:19.729283] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization...
00:06:32.822 [2024-05-15 00:43:19.729393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931928 ]
00:06:32.822 EAL: No free 2048 kB hugepages reported on node 1
00:06:32.822 [2024-05-15 00:43:19.788785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:33.126 [2024-05-15 00:43:19.907533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.126 00:43:20 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:33.126 00:43:20 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0
00:06:33.126 00:43:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:06:33.126 00:43:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:06:33.126 00:43:20 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:33.126 00:43:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:33.126 {
00:06:33.126 "filename": "/tmp/spdk_mem_dump.txt"
00:06:33.126 }
00:06:33.126 00:43:20 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:33.126 00:43:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:33.420 DPDK memory size 814.000000 MiB in 1 heap(s)
00:06:33.420 1 heaps totaling size 814.000000 MiB
00:06:33.420 size: 814.000000 MiB heap id: 0
00:06:33.420 end heaps----------
00:06:33.420 8 mempools totaling size 598.116089 MiB
00:06:33.420 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:06:33.420 size: 158.602051 MiB name: PDU_data_out_Pool
00:06:33.420 size: 84.521057 MiB name: bdev_io_3931928
00:06:33.420 size: 51.011292 MiB name: evtpool_3931928
00:06:33.420 size: 50.003479 MiB name: msgpool_3931928
00:06:33.420 size: 21.763794 MiB name: PDU_Pool
00:06:33.420 size: 19.513306 MiB name: SCSI_TASK_Pool
00:06:33.420 size: 0.026123 MiB name: Session_Pool
00:06:33.420 end mempools-------
00:06:33.420 6 memzones totaling size 4.142822 MiB
00:06:33.420 size: 1.000366 MiB name: RG_ring_0_3931928
00:06:33.420 size: 1.000366 MiB name: RG_ring_1_3931928
00:06:33.420 size: 1.000366 MiB name: RG_ring_4_3931928
00:06:33.420 size: 1.000366 MiB name: RG_ring_5_3931928
00:06:33.420 size: 0.125366 MiB name: RG_ring_2_3931928
00:06:33.420 size: 0.015991 MiB name: RG_ring_3_3931928
00:06:33.420 end memzones-------
00:06:33.420 00:43:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:06:33.420 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15
00:06:33.420 list of free elements. size: 12.519348 MiB
00:06:33.420 element at address: 0x200000400000 with size: 1.999512 MiB
00:06:33.420 element at address: 0x200018e00000 with size: 0.999878 MiB
00:06:33.420 element at address: 0x200019000000 with size: 0.999878 MiB
00:06:33.420 element at address: 0x200003e00000 with size: 0.996277 MiB
00:06:33.420 element at address: 0x200031c00000 with size: 0.994446 MiB
00:06:33.420 element at address: 0x200013800000 with size: 0.978699 MiB
00:06:33.420 element at address: 0x200007000000 with size: 0.959839 MiB
00:06:33.420 element at address: 0x200019200000 with size: 0.936584 MiB
00:06:33.420 element at address: 0x200000200000 with size: 0.841614 MiB
00:06:33.420 element at address: 0x20001aa00000 with size: 0.582886 MiB
00:06:33.420 element at address: 0x20000b200000 with size: 0.490723 MiB
00:06:33.420 element at address: 0x200000800000 with size: 0.487793 MiB
00:06:33.420 element at address: 0x200019400000 with size: 0.485657 MiB
00:06:33.420 element at address: 0x200027e00000 with size: 0.410034 MiB
00:06:33.420 element at address: 0x200003a00000 with size: 0.355530 MiB
00:06:33.420 list of standard malloc elements. size: 199.218079 MiB
00:06:33.420 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:06:33.420 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:06:33.420 element at address: 0x200018efff80 with size: 1.000122 MiB
00:06:33.420 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:06:33.420 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:06:33.420 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:06:33.420 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:06:33.420 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:06:33.420 element at address: 0x2000192efdc0 with size: 0.000305 MiB
00:06:33.420 element at address: 0x2000002d7740 with size: 0.000183 MiB
00:06:33.420 element at address: 0x2000002d7800 with size: 0.000183 MiB
00:06:33.420 element at address: 0x2000002d78c0 with size: 0.000183 MiB
00:06:33.420 element at address: 0x2000002d7ac0 with size: 0.000183 MiB
00:06:33.420 element at address: 0x2000002d7b80 with size: 0.000183 MiB
00:06:33.420 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:06:33.420 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:06:33.420 element at address: 0x20000087ce00 with size: 0.000183 MiB
00:06:33.420 element at address: 0x20000087cec0 with size: 0.000183 MiB
00:06:33.420 element at address: 0x2000008fd180 with size: 0.000183 MiB
00:06:33.420 element at address: 0x200003a5b040 with size: 0.000183 MiB
00:06:33.420 element at address: 0x200003adb300 with size: 0.000183 MiB
00:06:33.420 element at address: 0x200003adb500 with size: 0.000183 MiB
00:06:33.420 element at address: 0x200003adf7c0 with size: 0.000183 MiB
00:06:33.420 element at address: 0x200003affa80 with size: 0.000183 MiB
00:06:33.420 element at address: 0x200003affb40 with size: 0.000183 MiB
00:06:33.420 element at address: 0x200003eff0c0 with size: 0.000183 MiB
00:06:33.420 element at address: 0x2000070fdd80 with size: 0.000183 MiB
00:06:33.420 element at address: 0x20000b27da00 with size: 0.000183 MiB
00:06:33.420 element at address: 0x20000b27dac0 with size: 0.000183 MiB
00:06:33.420 element at address: 0x20000b2fdd80 with size: 0.000183 MiB
00:06:33.420 element at address: 0x2000138fa8c0 with size: 0.000183 MiB
00:06:33.420 element at address: 0x2000192efc40 with size: 0.000183 MiB
00:06:33.420 element at address: 0x2000192efd00 with size: 0.000183 MiB
00:06:33.420 element at address: 0x2000194bc740 with size: 0.000183 MiB
00:06:33.420 element at address: 0x20001aa95380 with size: 0.000183 MiB
00:06:33.420 element at address: 0x20001aa95440 with size: 0.000183 MiB
00:06:33.420 element at address: 0x200027e68f80 with size: 0.000183 MiB
00:06:33.420 element at address: 0x200027e69040 with size: 0.000183 MiB
00:06:33.420 element at address: 0x200027e6fc40 with size: 0.000183 MiB
00:06:33.420 element at address: 0x200027e6fe40 with size: 0.000183 MiB
00:06:33.420 element at address: 0x200027e6ff00 with size: 0.000183 MiB
00:06:33.420 list of memzone associated elements. size: 602.262573 MiB
00:06:33.420 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:06:33.420 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:33.420 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:06:33.420 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:33.420 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:06:33.420 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3931928_0
00:06:33.420 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:06:33.420 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3931928_0
00:06:33.420 element at address: 0x200003fff380 with size: 48.003052 MiB
00:06:33.420 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3931928_0
00:06:33.420 element at address: 0x2000195be940 with size: 20.255554 MiB
00:06:33.420 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:33.420 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:06:33.420 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:33.420 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:06:33.420 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3931928
00:06:33.420 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:06:33.420 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3931928
00:06:33.420 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:06:33.420 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3931928
00:06:33.420 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:06:33.420 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:33.420 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:06:33.420 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:33.420 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:06:33.420 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:33.420 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:06:33.420 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:33.420 element at address: 0x200003eff180 with size: 1.000488 MiB
00:06:33.420 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3931928
00:06:33.420 element at address: 0x200003affc00 with size: 1.000488 MiB
00:06:33.420 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3931928
00:06:33.420 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:06:33.420 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3931928
00:06:33.420 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:06:33.420 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3931928
00:06:33.420 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:06:33.420 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3931928
00:06:33.420 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:06:33.420 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:33.420 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:06:33.420 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:33.420 element at address: 0x20001947c540 with size: 0.250488 MiB
00:06:33.420 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:33.420 element at address: 0x200003adf880 with size: 0.125488 MiB
00:06:33.420 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3931928
00:06:33.421 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:06:33.421 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:33.421 element at address: 0x200027e69100 with size: 0.023743 MiB
00:06:33.421 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:33.421 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:06:33.421 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3931928
00:06:33.421 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:06:33.421 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:33.421 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:06:33.421 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3931928
00:06:33.421 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:06:33.421 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3931928
00:06:33.421 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:06:33.421 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:33.421 00:43:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:33.421 00:43:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3931928
00:06:33.421 00:43:20 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3931928 ']'
00:06:33.421 00:43:20 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3931928
00:06:33.421 00:43:20 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname
00:06:33.421 00:43:20 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:33.421 00:43:20 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3931928
00:06:33.421 00:43:20 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:33.421 00:43:20 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:33.421 00:43:20 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3931928'
00:06:33.421 killing process with pid 3931928
00:06:33.421 00:43:20 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3931928
00:06:33.421 00:43:20 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3931928
00:06:33.698
00:06:33.698 real 0m1.007s
00:06:33.698 user 0m1.058s
00:06:33.698 sys 0m0.375s
00:06:33.698 00:43:20 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:33.698 00:43:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:33.698 ************************************
00:06:33.698 END TEST dpdk_mem_utility
00:06:33.698 ************************************
00:06:33.698 00:43:20 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:33.698 00:43:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:33.698 00:43:20 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:33.698 00:43:20 -- common/autotest_common.sh@10 -- # set +x
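The memory report above is produced in two steps: an RPC asks the running target to dump its DPDK allocator state to a file, then dpdk_mem_info.py renders that dump, first as the heap/mempool/memzone summary and then, with -m 0, as the per-element view of heap 0. Replayed by hand against the same target it would look like this (paths as in this run):

  # Ask spdk_tgt to write its allocator state out (test_dpdk_mem_info.sh@19).
  scripts/rpc.py env_dpdk_get_mem_stats
  # => { "filename": "/tmp/spdk_mem_dump.txt" }

  # Summarize heaps, mempools and memzones from the dump (@21).
  scripts/dpdk_mem_info.py

  # Detail view of heap id 0: free elements, malloc elements, memzones (@23).
  scripts/dpdk_mem_info.py -m 0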
00:06:33.698 ************************************
00:06:33.698 START TEST event
00:06:33.698 ************************************
00:06:33.698 00:43:20 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:33.698 * Looking for test storage...
00:06:33.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:43:20 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:43:20 event -- bdev/nbd_common.sh@6 -- # set -e
00:43:20 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:43:20 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:43:20 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:43:20 event -- common/autotest_common.sh@10 -- # set +x
00:06:33.956 ************************************
00:06:33.956 START TEST event_perf
00:06:33.956 ************************************
00:06:33.956 00:43:20 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:33.956 Running I/O for 1 seconds...[2024-05-15 00:43:20.784328] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization...
00:06:33.956 [2024-05-15 00:43:20.784401] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932097 ]
00:06:33.956 EAL: No free 2048 kB hugepages reported on node 1
00:06:33.956 [2024-05-15 00:43:20.843354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:33.956 [2024-05-15 00:43:20.962657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:33.956 [2024-05-15 00:43:20.962707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:33.956 [2024-05-15 00:43:20.962753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:33.956 [2024-05-15 00:43:20.962756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:35.328 Running I/O for 1 seconds...
00:06:35.328 lcore 0: 232559
00:06:35.328 lcore 1: 232558
00:06:35.328 lcore 2: 232559
00:06:35.328 lcore 3: 232559
00:06:35.328 done.
00:06:35.328
00:06:35.328 real 0m1.302s
00:06:35.328 user 0m4.219s
00:06:35.328 sys 0m0.075s
00:06:35.328 00:43:22 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:35.328 00:43:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:35.328 ************************************
00:06:35.328 END TEST event_perf
00:06:35.328 ************************************
00:06:35.328 00:43:22 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:43:22 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:43:22 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:43:22 event -- common/autotest_common.sh@10 -- # set +x
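Every suite in this log runs through the same run_test wrapper, which prints the START/END banners and the real/user/sys trailer seen after event_perf above. A stripped-down sketch, inferred from the traced argument-count guard ('[' N -le 1 ']' counts the test name plus its command) and the visible output; the real helper also records per-test timings for the final report, which is omitted here:

  run_test() {
    local test_name=$1
    shift
    [ $# -le 0 ] && return 1      # the traced '[' N -le 1 ']' guard, post-shift

    echo '************************************'
    echo "START TEST $test_name"
    echo '************************************'

    time "$@"                     # produces the real/user/sys lines

    echo '************************************'
    echo "END TEST $test_name"
    echo '************************************'
  }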
00:06:35.328 ************************************
00:06:35.328 START TEST event_reactor
00:06:35.328 ************************************
00:06:35.328 00:43:22 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:35.328 [2024-05-15 00:43:22.151655] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization...
00:06:35.328 [2024-05-15 00:43:22.151727] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932226 ]
00:06:35.328 EAL: No free 2048 kB hugepages reported on node 1
00:06:35.328 [2024-05-15 00:43:22.211727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:35.328 [2024-05-15 00:43:22.330951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.701 test_start
00:06:36.701 oneshot
00:06:36.701 tick 100
00:06:36.701 tick 100
00:06:36.701 tick 250
00:06:36.701 tick 100
00:06:36.701 tick 100
00:06:36.701 tick 100
00:06:36.701 tick 250
00:06:36.701 tick 500
00:06:36.701 tick 100
00:06:36.701 tick 100
00:06:36.701 tick 250
00:06:36.701 tick 100
00:06:36.701 tick 100
00:06:36.701 test_end
00:06:36.701
00:06:36.701 real 0m1.304s
00:06:36.701 user 0m1.217s
00:06:36.701 sys 0m0.080s
00:06:36.701 00:43:23 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:36.701 00:43:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:36.701 ************************************
00:06:36.701 END TEST event_reactor
00:06:36.701 ************************************
00:06:36.701 00:43:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:43:23 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:43:23 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:43:23 event -- common/autotest_common.sh@10 -- # set +x
00:06:36.701 ************************************
00:06:36.701 START TEST event_reactor_perf
00:06:36.701 ************************************
00:06:36.701 00:43:23 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:36.701 [2024-05-15 00:43:23.521484] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization...
00:06:36.701 [2024-05-15 00:43:23.521556] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932349 ]
00:06:36.701 EAL: No free 2048 kB hugepages reported on node 1
00:06:36.701 [2024-05-15 00:43:23.580882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:36.701 [2024-05-15 00:43:23.700864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:38.073 test_start
00:06:38.073 test_end
00:06:38.073 Performance: 323977 events per second
00:06:38.073
00:06:38.073 real 0m1.305s
00:06:38.073 user 0m1.228s
00:06:38.073 sys 0m0.071s
00:06:38.073 00:43:24 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:38.073 00:43:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:38.073 ************************************
00:06:38.073 END TEST event_reactor_perf
00:06:38.073 ************************************
00:06:38.073 00:43:24 event -- event/event.sh@49 -- # uname -s
00:06:38.073 00:43:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:38.073 00:43:24 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:43:24 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:43:24 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:43:24 event -- common/autotest_common.sh@10 -- # set +x
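Each of these apps is gated on the waitforlisten helper before any RPC is issued; the trace shows its argument check (@827), rpc_addr default (@831) and max_retries=100 (@832), but hides the probe loop behind xtrace_disable. A plausible reconstruction, where the RPC round-trip probe and the sleep interval are assumptions:

  waitforlisten() {
    [ -z "$1" ] && return 1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
      # Assumed probe: any successful JSON-RPC round trip means the socket is live.
      if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        return 0
      fi
      sleep 0.1
    done
    return 1
  }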
00:06:38.073 ************************************
00:06:38.073 START TEST event_scheduler
00:06:38.073 ************************************
00:06:38.073 00:43:24 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:38.073 * Looking for test storage...
00:06:38.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:06:38.073 00:43:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:38.073 00:43:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3932588
00:06:38.073 00:43:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:38.073 00:43:24 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:38.073 00:43:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3932588
00:06:38.073 00:43:24 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3932588 ']'
00:06:38.073 00:43:24 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:38.073 00:43:24 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:38.073 00:43:24 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:38.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:38.073 00:43:24 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:38.073 00:43:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:38.073 [2024-05-15 00:43:24.975070] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization...
00:06:38.073 [2024-05-15 00:43:24.975163] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932588 ]
00:06:38.073 EAL: No free 2048 kB hugepages reported on node 1
00:06:38.073 [2024-05-15 00:43:25.039828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:38.331 [2024-05-15 00:43:25.160179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:38.331 [2024-05-15 00:43:25.160232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:38.331 [2024-05-15 00:43:25.160318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:38.331 [2024-05-15 00:43:25.160284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:38.331 00:43:25 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:38.331 00:43:25 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0
00:06:38.331 00:43:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:38.331 00:43:25 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:38.331 00:43:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:38.331 POWER: Env isn't set yet!
00:06:38.331 POWER: Attempting to initialise ACPI cpufreq power management...
00:06:38.331 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies
00:06:38.331 POWER: Cannot get available frequencies of lcore 0
00:06:38.331 POWER: Attempting to initialise PSTAT power management...
00:06:38.331 POWER: Power management governor of lcore 0 has been set to 'performance' successfully
00:06:38.331 POWER: Initialized successfully for lcore 0 power management
00:06:38.331 POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:06:38.331 POWER: Initialized successfully for lcore 1 power management
00:06:38.331 POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:06:38.331 POWER: Initialized successfully for lcore 2 power management
00:06:38.331 POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:06:38.331 POWER: Initialized successfully for lcore 3 power management
00:06:38.331 00:43:25 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:38.331 00:43:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:38.331 00:43:25 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:38.331 00:43:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:38.331 [2024-05-15 00:43:25.355382] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
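The scheduler_create_thread subtest that follows drives the dynamic scheduler purely through plugin RPCs: one busy (-a 100) and one idle (-a 0) thread pinned to each of the four cores, two unpinned threads, then one thread's activity is changed and another is deleted. The call shapes, collected from the traced scheduler.sh lines (thread IDs 11 and 12 are the values this run captures from the create calls; rpc_cmd here is the harness wrapper that forwards to the app's RPC socket):

  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50   # raise half_active to 50%
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12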
00:06:38.331 00:43:25 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:43:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:43:25 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:43:25 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable
00:43:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:38.589 ************************************
00:06:38.589 START TEST scheduler_create_thread
00:06:38.589 ************************************
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:38.589 2
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:38.589 3
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:38.589 4
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:38.589 5
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:38.589 6
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:38.589 7
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:38.589 8
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:38.589 9
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:38.589 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:38.590 10
00:06:38.590 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:38.590 00:43:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:38.590 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:38.590 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:38.590 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:38.590 00:43:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:38.590 00:43:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:38.590 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:38.590 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:38.590 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:38.590 00:43:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:38.590 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:38.590 00:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:39.962 00:43:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:39.962 00:43:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:39.962 00:43:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:39.962 00:43:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:39.962 00:43:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:41.333 00:43:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:41.333
00:06:41.333 real 0m2.618s
00:06:41.333 user 0m0.015s
00:06:41.333 sys 0m0.002s
00:06:41.333 00:43:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:41.333 00:43:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:41.333 ************************************
00:06:41.333 END TEST scheduler_create_thread
00:06:41.333 ************************************
00:06:41.333 00:43:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:41.333 00:43:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3932588
00:06:41.333 00:43:28 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3932588 ']'
00:06:41.333 00:43:28 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3932588
00:06:41.333 00:43:28 event.event_scheduler -- common/autotest_common.sh@951 -- # uname
00:06:41.333 00:43:28 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:41.333 00:43:28 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3932588
00:06:41.333 00:43:28 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:06:41.333 00:43:28 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:06:41.333 00:43:28 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3932588'
00:06:41.333 killing process with pid 3932588
00:06:41.333 00:43:28 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3932588
00:06:41.333 00:43:28 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3932588
00:06:41.592 [2024-05-15 00:43:28.490787] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
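The POWER lines that follow record DPDK's power library handing each core's cpufreq governor back to what it found at startup (userspace on lcore 0, schedutil on the others) after running them in performance mode for the test. Assuming the standard Linux cpufreq sysfs layout, the equivalent manual operation for a single core would be:

  # What the library effectively did at init, then undid at shutdown:
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor    # remember the original
  echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  # ... scheduler test runs ...
  echo userspace > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # restore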
00:06:41.592 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully
00:06:41.592 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:06:41.592 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully
00:06:41.592 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:06:41.592 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully
00:06:41.592 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:06:41.592 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully
00:06:41.592 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:06:41.850 
00:06:41.850 real 0m3.840s
00:06:41.850 user 0m5.879s
00:06:41.850 sys 0m0.307s
00:06:41.850 00:43:28 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:41.850 00:43:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:41.850 ************************************
00:06:41.850 END TEST event_scheduler
00:06:41.850 ************************************
00:06:41.850 00:43:28 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:41.850 00:43:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:41.850 00:43:28 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:41.850 00:43:28 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:41.850 00:43:28 event -- common/autotest_common.sh@10 -- # set +x
00:06:41.850 ************************************
00:06:41.850 START TEST app_repeat
00:06:41.850 ************************************
00:06:41.850 00:43:28 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test
00:06:41.850 00:43:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:41.850 00:43:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:41.850 00:43:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:41.850 00:43:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:41.850 00:43:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:41.850 00:43:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:41.850 00:43:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:41.850 00:43:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3932939
00:06:41.850 00:43:28 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:41.850 00:43:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:41.850 00:43:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3932939'
00:06:41.850 Process app_repeat pid: 3932939
00:06:41.850 00:43:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:41.850 00:43:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:06:41.850 spdk_app_start Round 0
00:06:41.850 00:43:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3932939 /var/tmp/spdk-nbd.sock
00:06:41.850 00:43:28 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3932939 ']'
00:06:41.850 00:43:28 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:41.850 00:43:28 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:41.850 00:43:28 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:41.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:41.850 00:43:28 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:41.850 00:43:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:41.850 [2024-05-15 00:43:28.809132] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... [2024-05-15 00:43:28.809205] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932939 ]
00:06:41.851 EAL: No free 2048 kB hugepages reported on node 1
00:06:41.851 [2024-05-15 00:43:28.867336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:42.108 [2024-05-15 00:43:28.984731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:42.108 [2024-05-15 00:43:28.984736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:42.108 00:43:29 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:42.108 00:43:29 event.app_repeat -- common/autotest_common.sh@860 -- # return 0
00:06:42.108 00:43:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:42.366 Malloc0
00:06:42.366 00:43:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:42.930 Malloc1
00:06:42.930 00:43:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:42.930 00:43:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:42.930 00:43:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:42.930 00:43:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:42.930 00:43:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:42.930 00:43:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:42.930 00:43:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:42.930 00:43:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:42.930 00:43:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:42.930 00:43:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:42.930 00:43:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:42.930 00:43:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:42.930 00:43:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:42.930 00:43:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:42.930 00:43:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:42.930 00:43:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:43.188 /dev/nbd0
00:06:43.188 00:43:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:43.188 00:43:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:43.188 00:43:30 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0
00:06:43.188 00:43:30 event.app_repeat -- common/autotest_common.sh@865 -- # local i
00:06:43.188 00:43:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:06:43.188 00:43:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:06:43.188 00:43:30 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions
00:06:43.188 00:43:30 event.app_repeat -- common/autotest_common.sh@869 -- # break
00:06:43.189 00:43:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:06:43.189 00:43:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:06:43.189 00:43:30 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:43.189 1+0 records in
00:06:43.189 1+0 records out
00:06:43.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178975 s, 22.9 MB/s
00:06:43.189 00:43:30 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:43.189 00:43:30 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:06:43.189 00:43:30 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:43.189 00:43:30 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:06:43.189 00:43:30 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:06:43.189 00:43:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:43.189 00:43:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:43.189 00:43:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:43.446 /dev/nbd1
00:06:43.447 00:43:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:43.447 00:43:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:43.447 00:43:30 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1
00:06:43.447 00:43:30 event.app_repeat -- common/autotest_common.sh@865 -- # local i
00:06:43.447 00:43:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:06:43.447 00:43:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:06:43.447 00:43:30 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions
00:06:43.447 00:43:30 event.app_repeat -- common/autotest_common.sh@869 -- # break
00:06:43.447 00:43:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:06:43.447 00:43:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:06:43.447 00:43:30 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:43.447 1+0 records in
00:06:43.447 1+0 records out
00:06:43.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187967 s, 21.8 MB/s
00:06:43.447 00:43:30 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:43.447 00:43:30 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:06:43.447 00:43:30 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:43.447 00:43:30 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:06:43.447 00:43:30 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:06:43.447 00:43:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:43.447 00:43:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:43.447 00:43:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:43.447 00:43:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:43.447 00:43:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:43.704 {
00:06:43.704 "nbd_device": "/dev/nbd0",
00:06:43.704 "bdev_name": "Malloc0"
00:06:43.704 },
00:06:43.704 {
00:06:43.704 "nbd_device": "/dev/nbd1",
00:06:43.704 "bdev_name": "Malloc1"
00:06:43.704 }
00:06:43.704 ]'
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:43.704 {
00:06:43.704 "nbd_device": "/dev/nbd0",
00:06:43.704 "bdev_name": "Malloc0"
00:06:43.704 },
00:06:43.704 {
00:06:43.704 "nbd_device": "/dev/nbd1",
00:06:43.704 "bdev_name": "Malloc1"
00:06:43.704 }
00:06:43.704 ]'
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:43.704 /dev/nbd1'
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:43.704 /dev/nbd1'
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:43.704 256+0 records in
00:06:43.704 256+0 records out
00:06:43.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504172 s, 208 MB/s
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:43.704 256+0 records in
00:06:43.704 256+0 records out
00:06:43.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252676 s, 41.5 MB/s
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:43.704 00:43:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:43.962 256+0 records in
00:06:43.962 256+0 records out
00:06:43.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268699 s, 39.0 MB/s
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:43.962 00:43:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:44.220 00:43:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:44.220 00:43:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:44.220 00:43:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:44.220 00:43:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:44.220 00:43:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:44.220 00:43:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:44.220 00:43:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:44.220 00:43:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:44.220 00:43:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:44.220 00:43:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:44.477 00:43:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:44.477 00:43:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:44.477 00:43:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:44.477 00:43:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:44.477 00:43:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:44.477 00:43:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:44.477 00:43:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:44.477 00:43:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:44.477 00:43:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:44.477 00:43:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:44.477 00:43:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:44.735 00:43:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:44.735 00:43:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:44.735 00:43:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:44.735 00:43:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:44.735 00:43:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:44.735 00:43:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:44.735 00:43:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:44.735 00:43:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:44.735 00:43:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:44.735 00:43:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:44.735 00:43:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:44.735 00:43:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:44.735 00:43:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:44.993 00:43:32 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:45.251 [2024-05-15 00:43:32.236779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:45.508 [2024-05-15 00:43:32.353456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:45.508 [2024-05-15 00:43:32.353456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:45.508 [2024-05-15 00:43:32.402392] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:45.508 [2024-05-15 00:43:32.402465] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
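(Round 0 above is the whole nbd data-path check in miniature: fill a scratch file with random data, push it through each /dev/nbdX with O_DIRECT, then read the devices back and compare. A condensed sketch of that flow, using /tmp for the scratch file instead of the repo path the suite uses:

    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256           # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of=$dev bs=4096 count=256 oflag=direct  # write through the nbd device
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M /tmp/nbdrandtest $dev                             # byte-for-byte readback verify
    done
    rm /tmp/nbdrandtest

Any mismatch makes cmp exit non-zero, which fails the round.)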
00:06:48.033 00:43:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:48.033 00:43:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:06:48.033 spdk_app_start Round 1
00:06:48.033 00:43:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3932939 /var/tmp/spdk-nbd.sock
00:06:48.033 00:43:35 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3932939 ']'
00:06:48.033 00:43:35 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:48.034 00:43:35 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:48.034 00:43:35 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:48.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:48.034 00:43:35 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:48.034 00:43:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:48.306 00:43:35 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:48.306 00:43:35 event.app_repeat -- common/autotest_common.sh@860 -- # return 0
00:06:48.306 00:43:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:48.870 Malloc0
00:06:48.870 00:43:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:49.128 Malloc1
00:06:49.128 00:43:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:49.128 00:43:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:49.128 00:43:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:49.128 00:43:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:49.128 00:43:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:49.128 00:43:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:49.128 00:43:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:49.128 00:43:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:49.128 00:43:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:49.128 00:43:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:49.128 00:43:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:49.128 00:43:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:49.128 00:43:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:49.128 00:43:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:49.128 00:43:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:49.128 00:43:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:49.386 /dev/nbd0
00:06:49.386 00:43:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:49.386 00:43:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:49.386 00:43:36 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0
00:06:49.386 00:43:36 event.app_repeat -- common/autotest_common.sh@865 -- # local i
00:06:49.386 00:43:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:06:49.386 00:43:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:06:49.386 00:43:36 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions
00:06:49.386 00:43:36 event.app_repeat -- common/autotest_common.sh@869 -- # break
00:06:49.386 00:43:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:06:49.386 00:43:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:06:49.386 00:43:36 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:49.386 1+0 records in
00:06:49.386 1+0 records out
00:06:49.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197506 s, 20.7 MB/s
00:06:49.386 00:43:36 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:49.386 00:43:36 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:06:49.386 00:43:36 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:49.386 00:43:36 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:06:49.386 00:43:36 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:06:49.386 00:43:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:49.386 00:43:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:49.386 00:43:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:49.644 /dev/nbd1
00:06:49.644 00:43:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:49.644 00:43:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:49.644 00:43:36 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1
00:06:49.644 00:43:36 event.app_repeat -- common/autotest_common.sh@865 -- # local i
00:06:49.644 00:43:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:06:49.644 00:43:36 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:06:49.644 00:43:36 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions
00:06:49.644 00:43:36 event.app_repeat -- common/autotest_common.sh@869 -- # break
00:06:49.644 00:43:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:06:49.644 00:43:36 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:06:49.644 00:43:36 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:49.644 1+0 records in
00:06:49.644 1+0 records out
00:06:49.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183708 s, 22.3 MB/s
00:06:49.644 00:43:36 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:49.644 00:43:36 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:06:49.644 00:43:36 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:49.644 00:43:36 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:06:49.644 00:43:36 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:06:49.644 00:43:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:49.644 00:43:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:49.644 00:43:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:49.644 00:43:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:49.644 00:43:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:49.902 {
00:06:49.902 "nbd_device": "/dev/nbd0",
00:06:49.902 "bdev_name": "Malloc0"
00:06:49.902 },
00:06:49.902 {
00:06:49.902 "nbd_device": "/dev/nbd1",
00:06:49.902 "bdev_name": "Malloc1"
00:06:49.902 }
00:06:49.902 ]'
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:49.902 {
00:06:49.902 "nbd_device": "/dev/nbd0",
00:06:49.902 "bdev_name": "Malloc0"
00:06:49.902 },
00:06:49.902 {
00:06:49.902 "nbd_device": "/dev/nbd1",
00:06:49.902 "bdev_name": "Malloc1"
00:06:49.902 }
00:06:49.902 ]'
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:49.902 /dev/nbd1'
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:49.902 /dev/nbd1'
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:49.902 256+0 records in
00:06:49.902 256+0 records out
00:06:49.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00593369 s, 177 MB/s
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:49.902 00:43:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:50.159 256+0 records in
00:06:50.159 256+0 records out
00:06:50.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254965 s, 41.1 MB/s
00:06:50.159 00:43:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:50.159 00:43:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:50.159 256+0 records in
00:06:50.159 256+0 records out
00:06:50.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267903 s, 39.1 MB/s
00:06:50.159 00:43:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:50.159 00:43:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:50.159 00:43:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:50.159 00:43:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:50.159 00:43:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:50.159 00:43:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:50.159 00:43:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:50.159 00:43:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:50.159 00:43:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:50.159 00:43:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:50.159 00:43:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:50.160 00:43:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:50.160 00:43:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:50.160 00:43:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:50.160 00:43:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:50.160 00:43:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:50.160 00:43:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:50.160 00:43:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:50.160 00:43:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:50.417 00:43:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:50.417 00:43:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:50.417 00:43:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:50.417 00:43:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:50.417 00:43:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:50.417 00:43:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:50.417 00:43:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:50.417 00:43:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:50.417 00:43:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:50.417 00:43:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:50.675 00:43:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:50.675 00:43:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:50.675 00:43:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:50.675 00:43:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:50.675 00:43:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:50.675 00:43:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:50.675 00:43:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:50.675 00:43:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:50.675 00:43:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:50.675 00:43:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:50.675 00:43:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:50.932 00:43:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:50.932 00:43:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:50.932 00:43:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:50.932 00:43:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:50.932 00:43:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:50.932 00:43:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:50.932 00:43:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:50.932 00:43:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:50.932 00:43:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:50.932 00:43:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:50.932 00:43:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:50.932 00:43:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:50.932 00:43:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:51.498 00:43:38 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:51.498 [2024-05-15 00:43:38.470340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:51.756 [2024-05-15 00:43:38.588948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:51.756 [2024-05-15 00:43:38.588955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.756 [2024-05-15 00:43:38.640456] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:51.756 [2024-05-15 00:43:38.640532] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
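(The waitfornbd and waitfornbd_exit steps repeated in every round are bounded polls of /proc/partitions; roughly, as a sketch rather than the verbatim helper -- the retry delay, for instance, is not visible in the trace:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # device node registered yet?
            sleep 0.1
        done
        # prove the device actually answers I/O: read back a single 4 KiB block
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
    }

waitfornbd_exit is the mirror image: it loops until grep -q -w no longer finds the name in /proc/partitions.)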
00:06:54.282 00:43:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:54.282 00:43:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:06:54.282 spdk_app_start Round 2
00:06:54.282 00:43:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3932939 /var/tmp/spdk-nbd.sock
00:06:54.282 00:43:41 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3932939 ']'
00:06:54.282 00:43:41 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:54.282 00:43:41 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:54.282 00:43:41 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:54.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:54.282 00:43:41 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:54.282 00:43:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:54.539 00:43:41 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:54.539 00:43:41 event.app_repeat -- common/autotest_common.sh@860 -- # return 0
00:06:54.539 00:43:41 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:55.104 Malloc0
00:06:55.104 00:43:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:55.362 Malloc1
00:06:55.362 00:43:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:55.362 00:43:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:55.362 00:43:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:55.362 00:43:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:55.362 00:43:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:55.362 00:43:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:55.362 00:43:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:55.362 00:43:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:55.362 00:43:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:55.362 00:43:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:55.362 00:43:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:55.362 00:43:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:55.362 00:43:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:55.362 00:43:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:55.362 00:43:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:55.362 00:43:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:55.620 /dev/nbd0
00:06:55.620 00:43:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:55.620 00:43:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:55.620 00:43:42 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0
00:06:55.620 00:43:42 event.app_repeat -- common/autotest_common.sh@865 -- # local i
00:06:55.620 00:43:42 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:06:55.620 00:43:42 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:06:55.620 00:43:42 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions
00:06:55.620 00:43:42 event.app_repeat -- common/autotest_common.sh@869 -- # break
00:06:55.620 00:43:42 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:06:55.620 00:43:42 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:06:55.620 00:43:42 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:55.620 1+0 records in
00:06:55.620 1+0 records out
00:06:55.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176991 s, 23.1 MB/s
00:06:55.620 00:43:42 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:55.620 00:43:42 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:06:55.620 00:43:42 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:55.620 00:43:42 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:06:55.620 00:43:42 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:06:55.620 00:43:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:55.620 00:43:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:55.620 00:43:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:55.877 /dev/nbd1
00:06:55.877 00:43:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:55.877 00:43:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:55.877 00:43:42 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1
00:06:55.877 00:43:42 event.app_repeat -- common/autotest_common.sh@865 -- # local i
00:06:55.877 00:43:42 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:06:55.877 00:43:42 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:06:55.877 00:43:42 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions
00:06:55.877 00:43:42 event.app_repeat -- common/autotest_common.sh@869 -- # break
00:06:55.877 00:43:42 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:06:55.877 00:43:42 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:06:55.877 00:43:42 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:55.877 1+0 records in
00:06:55.877 1+0 records out
00:06:55.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236019 s, 17.4 MB/s
00:06:55.877 00:43:42 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:55.877 00:43:42 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:06:55.877 00:43:42 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:55.877 00:43:42 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:06:55.877 00:43:42 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:06:55.877 00:43:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:55.877 00:43:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:55.877 00:43:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:55.877 00:43:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:55.877 00:43:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:56.135 {
00:06:56.135 "nbd_device": "/dev/nbd0",
00:06:56.135 "bdev_name": "Malloc0"
00:06:56.135 },
00:06:56.135 {
00:06:56.135 "nbd_device": "/dev/nbd1",
00:06:56.135 "bdev_name": "Malloc1"
00:06:56.135 }
00:06:56.135 ]'
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:56.135 {
00:06:56.135 "nbd_device": "/dev/nbd0",
00:06:56.135 "bdev_name": "Malloc0"
00:06:56.135 },
00:06:56.135 {
00:06:56.135 "nbd_device": "/dev/nbd1",
00:06:56.135 "bdev_name": "Malloc1"
00:06:56.135 }
00:06:56.135 ]'
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:56.135 /dev/nbd1'
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:56.135 /dev/nbd1'
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:56.135 256+0 records in
00:06:56.135 256+0 records out
00:06:56.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513728 s, 204 MB/s
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:56.135 00:43:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:56.393 256+0 records in
00:06:56.393 256+0 records out
00:06:56.393 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259115 s, 40.5 MB/s
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:56.393 256+0 records in
00:06:56.393 256+0 records out
00:06:56.393 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026576 s, 39.5 MB/s
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:56.393 00:43:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:56.650 00:43:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:56.650 00:43:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:56.650 00:43:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:56.650 00:43:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:56.650 00:43:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:56.650 00:43:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:56.650 00:43:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:56.650 00:43:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:56.650 00:43:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:56.650 00:43:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:56.907 00:43:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:56.907 00:43:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:56.907 00:43:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:56.907 00:43:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:56.907 00:43:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:56.907 00:43:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:56.907 00:43:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:56.907 00:43:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:56.907 00:43:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:56.907 00:43:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:56.907 00:43:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:57.164 00:43:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:57.164 00:43:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:57.164 00:43:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:57.164 00:43:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:57.164 00:43:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:57.164 00:43:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:57.164 00:43:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:57.164 00:43:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:57.164 00:43:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:57.164 00:43:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:57.164 00:43:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:57.164 00:43:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:57.164 00:43:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:57.729 00:43:44 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:57.729 [2024-05-15 00:43:44.694993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:57.987 [2024-05-15 00:43:44.813529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:57.987 [2024-05-15 00:43:44.813530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:57.987 [2024-05-15 00:43:44.864675] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:57.987 [2024-05-15 00:43:44.864752] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
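(Each teardown ends with the same invariant: nbd_get_disks must return an empty JSON array, i.e. a /dev/nbd count of 0. The '-- # true' line in the trace is grep -c exiting non-zero on zero matches, swallowed so the count can still be captured. In shell terms, against the same socket the trace uses:

    count=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
              | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -ne 0 ] && echo 'leftover nbd devices' && exit 1

Any leftover /dev/nbdX at this point would fail the round before the next restart.)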
00:07:00.568 00:43:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3932939 /var/tmp/spdk-nbd.sock
00:07:00.568 00:43:47 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3932939 ']'
00:07:00.568 00:43:47 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:00.568 00:43:47 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100
00:07:00.568 00:43:47 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:00.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:00.568 00:43:47 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable
00:07:00.568 00:43:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:00.826 00:43:47 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:07:00.826 00:43:47 event.app_repeat -- common/autotest_common.sh@860 -- # return 0
00:07:00.826 00:43:47 event.app_repeat -- event/event.sh@39 -- # killprocess 3932939
00:07:00.826 00:43:47 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3932939 ']'
00:07:00.826 00:43:47 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3932939
00:07:00.826 00:43:47 event.app_repeat -- common/autotest_common.sh@951 -- # uname
00:07:00.826 00:43:47 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:07:00.826 00:43:47 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3932939
00:07:00.826 00:43:47 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:07:00.826 00:43:47 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:07:00.826 00:43:47 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3932939'
00:07:00.826 killing process with pid 3932939
00:07:00.826 00:43:47 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3932939
00:07:00.826 00:43:47 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3932939
00:07:01.085 spdk_app_start is called in Round 0.
00:07:01.085 Shutdown signal received, stop current app iteration
00:07:01.085 Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 reinitialization...
00:07:01.085 spdk_app_start is called in Round 1.
00:07:01.085 Shutdown signal received, stop current app iteration
00:07:01.085 Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 reinitialization...
00:07:01.085 spdk_app_start is called in Round 2.
00:07:01.085 Shutdown signal received, stop current app iteration
00:07:01.085 Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 reinitialization...
00:07:01.085 spdk_app_start is called in Round 3.
00:07:01.085 Shutdown signal received, stop current app iteration 00:07:01.085 00:43:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:01.085 00:43:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:01.085 00:07:01.085 real 0m19.233s 00:07:01.085 user 0m42.985s 00:07:01.085 sys 0m3.548s 00:07:01.085 00:43:48 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.085 00:43:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.085 ************************************ 00:07:01.085 END TEST app_repeat 00:07:01.085 ************************************ 00:07:01.085 00:43:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:01.085 00:43:48 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:01.085 00:43:48 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:01.085 00:43:48 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.085 00:43:48 event -- common/autotest_common.sh@10 -- # set +x 00:07:01.085 ************************************ 00:07:01.085 START TEST cpu_locks 00:07:01.085 ************************************ 00:07:01.085 00:43:48 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:01.085 * Looking for test storage... 00:07:01.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:01.085 00:43:48 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:01.085 00:43:48 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:01.085 00:43:48 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:01.085 00:43:48 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:01.085 00:43:48 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:01.085 00:43:48 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.085 00:43:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.343 ************************************ 00:07:01.343 START TEST default_locks 00:07:01.343 ************************************ 00:07:01.343 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:07:01.343 00:43:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3934977 00:07:01.343 00:43:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:01.343 00:43:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3934977 00:07:01.343 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3934977 ']' 00:07:01.343 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.343 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:01.343 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
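locks_exist, invoked just below, is a one-liner: list the POSIX locks held by the target pid with lslocks and grep for the spdk_cpu_lock prefix, since spdk_tgt locks /var/tmp/spdk_cpu_lock_NNN for every core it claims (the stray "lslocks: write error" in the log is just grep -q closing the pipe early). A hedged sketch, with the pid variable as a placeholder:

    locks_exist() {
        local pid=$1
        # spdk_tgt -m 0x1 should be holding /var/tmp/spdk_cpu_lock_000
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist "$spdk_tgt_pid" && echo "core locks held by $spdk_tgt_pid"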
00:07:01.343 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:01.343 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.343 [2024-05-15 00:43:48.225789] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:01.343 [2024-05-15 00:43:48.225896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3934977 ] 00:07:01.343 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.343 [2024-05-15 00:43:48.285599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.601 [2024-05-15 00:43:48.405648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.601 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:01.601 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:07:01.601 00:43:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3934977 00:07:01.601 00:43:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3934977 00:07:01.601 00:43:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.166 lslocks: write error 00:07:02.166 00:43:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3934977 00:07:02.166 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3934977 ']' 00:07:02.166 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3934977 00:07:02.166 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:07:02.166 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:02.166 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3934977 00:07:02.166 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:02.166 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:02.166 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3934977' 00:07:02.166 killing process with pid 3934977 00:07:02.166 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3934977 00:07:02.166 00:43:48 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3934977 00:07:02.424 00:43:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3934977 00:07:02.424 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:02.424 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3934977 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 3934977 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3934977 ']' 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3934977) - No such process 00:07:02.425 ERROR: process (pid: 3934977) is no longer running 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:02.425 00:07:02.425 real 0m1.116s 00:07:02.425 user 0m1.119s 00:07:02.425 sys 0m0.520s 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.425 00:43:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.425 ************************************ 00:07:02.425 END TEST default_locks 00:07:02.425 ************************************ 00:07:02.425 00:43:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:02.425 00:43:49 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:02.425 00:43:49 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.425 00:43:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.425 ************************************ 00:07:02.425 START TEST default_locks_via_rpc 00:07:02.425 ************************************ 00:07:02.425 00:43:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:07:02.425 00:43:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3935107 00:07:02.425 00:43:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.425 00:43:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3935107 00:07:02.425 00:43:49 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3935107 ']' 00:07:02.425 00:43:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.425 00:43:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:02.425 00:43:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.425 00:43:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:02.425 00:43:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.425 [2024-05-15 00:43:49.402295] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:02.425 [2024-05-15 00:43:49.402394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3935107 ] 00:07:02.425 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.425 [2024-05-15 00:43:49.463268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.683 [2024-05-15 00:43:49.583279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.941 00:43:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:02.941 00:43:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:02.941 00:43:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:02.941 00:43:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.941 00:43:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.941 00:43:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.941 00:43:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:02.941 00:43:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:02.941 00:43:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:02.941 00:43:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:02.941 00:43:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:02.941 00:43:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.941 00:43:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.941 00:43:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.941 00:43:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3935107 00:07:02.941 00:43:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3935107 00:07:02.941 00:43:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:03.506 00:43:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3935107 00:07:03.506 00:43:50 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3935107 ']' 00:07:03.506 00:43:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3935107 00:07:03.506 00:43:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:07:03.506 00:43:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:03.506 00:43:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3935107 00:07:03.506 00:43:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:03.506 00:43:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:03.506 00:43:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3935107' 00:07:03.506 killing process with pid 3935107 00:07:03.506 00:43:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3935107 00:07:03.506 00:43:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3935107 00:07:03.764 00:07:03.764 real 0m1.263s 00:07:03.764 user 0m1.279s 00:07:03.764 sys 0m0.526s 00:07:03.764 00:43:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.764 00:43:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.764 ************************************ 00:07:03.764 END TEST default_locks_via_rpc 00:07:03.764 ************************************ 00:07:03.764 00:43:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:03.764 00:43:50 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.764 00:43:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.764 00:43:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.764 ************************************ 00:07:03.764 START TEST non_locking_app_on_locked_coremask 00:07:03.764 ************************************ 00:07:03.764 00:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:07:03.764 00:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3935237 00:07:03.764 00:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.764 00:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3935237 /var/tmp/spdk.sock 00:07:03.764 00:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3935237 ']' 00:07:03.764 00:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.764 00:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:03.764 00:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
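The default_locks_via_rpc pass that just finished toggles the same core locks at runtime instead of at launch. Underneath the rpc_cmd wrapper in the trace these are two plain RPC calls against the running target; the socket path below matches the one in the log:

    # Release every /var/tmp/spdk_cpu_lock_* file while the target keeps running...
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks

    # ...and claim them back; this is the call that fails with -32603 when
    # another process holds one of the cores (see the end of this section).
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks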
00:07:03.764 00:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:03.764 00:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.764 [2024-05-15 00:43:50.728703] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:03.764 [2024-05-15 00:43:50.728798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3935237 ] 00:07:03.764 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.764 [2024-05-15 00:43:50.788976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.022 [2024-05-15 00:43:50.908598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.280 00:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:04.280 00:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:04.280 00:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3935334 00:07:04.280 00:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:04.280 00:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3935334 /var/tmp/spdk2.sock 00:07:04.280 00:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3935334 ']' 00:07:04.280 00:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.280 00:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:04.280 00:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.280 00:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:04.280 00:43:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.280 [2024-05-15 00:43:51.181467] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:04.280 [2024-05-15 00:43:51.181568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3935334 ] 00:07:04.280 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.280 [2024-05-15 00:43:51.272179] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
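Two targets can share core mask 0x1 in this test only because the second one opts out of locking and gets its own RPC socket. A sketch of that launch pattern, with the binary path shortened and the pid variables as placeholders:

    # First instance claims core 0 and locks /var/tmp/spdk_cpu_lock_000.
    ./build/bin/spdk_tgt -m 0x1 &
    pid1=$!

    # Second instance runs on the same core but skips the lock files, and
    # listens on a separate socket so the two RPC servers don't collide.
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!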
00:07:04.280 [2024-05-15 00:43:51.272231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.538 [2024-05-15 00:43:51.510757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.472 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:05.472 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:05.472 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3935237 00:07:05.472 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3935237 00:07:05.472 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.037 lslocks: write error 00:07:06.037 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3935237 00:07:06.037 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3935237 ']' 00:07:06.037 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3935237 00:07:06.037 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:06.037 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:06.037 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3935237 00:07:06.037 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:06.037 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:06.037 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3935237' 00:07:06.037 killing process with pid 3935237 00:07:06.037 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3935237 00:07:06.037 00:43:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3935237 00:07:06.611 00:43:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3935334 00:07:06.611 00:43:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3935334 ']' 00:07:06.611 00:43:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3935334 00:07:06.611 00:43:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:06.611 00:43:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:06.611 00:43:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3935334 00:07:06.611 00:43:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:06.611 00:43:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:06.611 00:43:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3935334' 00:07:06.611 
killing process with pid 3935334 00:07:06.611 00:43:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3935334 00:07:06.611 00:43:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3935334 00:07:06.873 00:07:06.873 real 0m3.217s 00:07:06.873 user 0m3.571s 00:07:06.873 sys 0m1.048s 00:07:06.873 00:43:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.873 00:43:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.873 ************************************ 00:07:06.873 END TEST non_locking_app_on_locked_coremask 00:07:06.873 ************************************ 00:07:06.873 00:43:53 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:06.873 00:43:53 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:06.873 00:43:53 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.873 00:43:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.131 ************************************ 00:07:07.131 START TEST locking_app_on_unlocked_coremask 00:07:07.131 ************************************ 00:07:07.131 00:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:07:07.131 00:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3935575 00:07:07.131 00:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:07.131 00:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3935575 /var/tmp/spdk.sock 00:07:07.131 00:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3935575 ']' 00:07:07.131 00:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.131 00:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:07.131 00:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.132 00:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:07.132 00:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.132 [2024-05-15 00:43:54.009849] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:07.132 [2024-05-15 00:43:54.009956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3935575 ] 00:07:07.132 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.132 [2024-05-15 00:43:54.071061] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
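Each claimed core is just an advisory lock on /var/tmp/spdk_cpu_lock_NNN, which is why the target above, started with --disable-cpumask-locks, leaves the file free for the next process to take. The contention can be imitated from a shell with util-linux flock(1); this only illustrates the idea and is not SPDK's actual locking call:

    lock=/var/tmp/spdk_cpu_lock_000

    # Hold the core-0 lock file in the background for a while...
    flock --nonblock "$lock" --command 'sleep 30' &
    sleep 0.2    # give the background claim time to land

    # ...so a second non-blocking claim fails at once, the shell analogue of
    # "Cannot create lock on core 0, probably process ... has claimed it."
    flock --nonblock "$lock" --command 'true' || echo "core 0 already claimed"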
00:07:07.132 [2024-05-15 00:43:54.071113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.389 [2024-05-15 00:43:54.191277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.389 00:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:07.389 00:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:07.389 00:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3935672 00:07:07.389 00:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:07.389 00:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3935672 /var/tmp/spdk2.sock 00:07:07.389 00:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3935672 ']' 00:07:07.389 00:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.389 00:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:07.389 00:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.389 00:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:07.389 00:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.648 [2024-05-15 00:43:54.474609] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:07:07.648 [2024-05-15 00:43:54.474709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3935672 ] 00:07:07.648 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.648 [2024-05-15 00:43:54.564280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.906 [2024-05-15 00:43:54.804392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.839 00:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:08.839 00:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:08.839 00:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3935672 00:07:08.839 00:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3935672 00:07:08.839 00:43:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.404 lslocks: write error 00:07:09.404 00:43:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3935575 00:07:09.404 00:43:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3935575 ']' 00:07:09.404 00:43:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3935575 00:07:09.404 00:43:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:09.404 00:43:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:09.404 00:43:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3935575 00:07:09.404 00:43:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:09.404 00:43:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:09.404 00:43:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3935575' 00:07:09.404 killing process with pid 3935575 00:07:09.404 00:43:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3935575 00:07:09.404 00:43:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3935575 00:07:10.339 00:43:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3935672 00:07:10.339 00:43:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3935672 ']' 00:07:10.339 00:43:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3935672 00:07:10.339 00:43:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:10.339 00:43:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:10.339 00:43:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3935672 00:07:10.339 00:43:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:07:10.339 00:43:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:10.339 00:43:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3935672' 00:07:10.339 killing process with pid 3935672 00:07:10.339 00:43:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3935672 00:07:10.339 00:43:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3935672 00:07:10.597 00:07:10.597 real 0m3.505s 00:07:10.597 user 0m3.874s 00:07:10.597 sys 0m1.067s 00:07:10.597 00:43:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.597 00:43:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.597 ************************************ 00:07:10.597 END TEST locking_app_on_unlocked_coremask 00:07:10.597 ************************************ 00:07:10.597 00:43:57 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:10.597 00:43:57 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:10.597 00:43:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.597 00:43:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.597 ************************************ 00:07:10.597 START TEST locking_app_on_locked_coremask 00:07:10.597 ************************************ 00:07:10.597 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:07:10.597 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3935923 00:07:10.597 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.597 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3935923 /var/tmp/spdk.sock 00:07:10.597 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3935923 ']' 00:07:10.597 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.597 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:10.597 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.597 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:10.597 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.597 [2024-05-15 00:43:57.569143] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:07:10.597 [2024-05-15 00:43:57.569238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3935923 ] 00:07:10.597 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.597 [2024-05-15 00:43:57.628299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.855 [2024-05-15 00:43:57.745592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3936010 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3936010 /var/tmp/spdk2.sock 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3936010 /var/tmp/spdk2.sock 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3936010 /var/tmp/spdk2.sock 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3936010 ']' 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:11.114 00:43:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.114 [2024-05-15 00:43:58.028478] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:07:11.114 [2024-05-15 00:43:58.028580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3936010 ] 00:07:11.114 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.115 [2024-05-15 00:43:58.119252] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3935923 has claimed it. 00:07:11.115 [2024-05-15 00:43:58.119324] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:12.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3936010) - No such process 00:07:12.050 ERROR: process (pid: 3936010) is no longer running 00:07:12.050 00:43:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:12.050 00:43:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:12.050 00:43:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:12.050 00:43:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:12.050 00:43:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:12.050 00:43:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:12.050 00:43:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3935923 00:07:12.050 00:43:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3935923 00:07:12.050 00:43:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.308 lslocks: write error 00:07:12.308 00:43:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3935923 00:07:12.308 00:43:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3935923 ']' 00:07:12.308 00:43:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3935923 00:07:12.308 00:43:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:12.308 00:43:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:12.308 00:43:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3935923 00:07:12.308 00:43:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:12.308 00:43:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:12.308 00:43:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3935923' 00:07:12.308 killing process with pid 3935923 00:07:12.308 00:43:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3935923 00:07:12.308 00:43:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3935923 00:07:12.567 00:07:12.567 real 0m1.952s 00:07:12.567 user 0m2.244s 00:07:12.567 sys 0m0.604s 00:07:12.567 00:43:59 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.567 00:43:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.567 ************************************ 00:07:12.567 END TEST locking_app_on_locked_coremask 00:07:12.567 ************************************ 00:07:12.567 00:43:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:12.567 00:43:59 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:12.567 00:43:59 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.567 00:43:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.567 ************************************ 00:07:12.567 START TEST locking_overlapped_coremask 00:07:12.567 ************************************ 00:07:12.567 00:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:07:12.567 00:43:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3936150 00:07:12.567 00:43:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:12.567 00:43:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3936150 /var/tmp/spdk.sock 00:07:12.567 00:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3936150 ']' 00:07:12.567 00:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.567 00:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:12.567 00:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.567 00:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:12.567 00:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.567 [2024-05-15 00:43:59.585458] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:07:12.567 [2024-05-15 00:43:59.585553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3936150 ] 00:07:12.567 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.825 [2024-05-15 00:43:59.646294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.825 [2024-05-15 00:43:59.766664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.825 [2024-05-15 00:43:59.768954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.825 [2024-05-15 00:43:59.768995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.083 00:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:13.083 00:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:13.083 00:44:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3936246 00:07:13.083 00:44:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:13.083 00:44:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3936246 /var/tmp/spdk2.sock 00:07:13.083 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:13.083 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3936246 /var/tmp/spdk2.sock 00:07:13.083 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:13.083 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.083 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:13.083 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.083 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3936246 /var/tmp/spdk2.sock 00:07:13.083 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3936246 ']' 00:07:13.083 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.083 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:13.083 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.083 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:13.083 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.083 [2024-05-15 00:44:00.057533] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:07:13.083 [2024-05-15 00:44:00.057642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3936246 ] 00:07:13.083 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.341 [2024-05-15 00:44:00.148476] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3936150 has claimed it. 00:07:13.341 [2024-05-15 00:44:00.148538] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:13.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3936246) - No such process 00:07:13.906 ERROR: process (pid: 3936246) is no longer running 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3936150 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3936150 ']' 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3936150 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3936150 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3936150' 00:07:13.906 killing process with pid 3936150 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
3936150 00:07:13.906 00:44:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3936150 00:07:14.165 00:07:14.165 real 0m1.617s 00:07:14.165 user 0m4.360s 00:07:14.165 sys 0m0.412s 00:07:14.165 00:44:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.165 00:44:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.165 ************************************ 00:07:14.165 END TEST locking_overlapped_coremask 00:07:14.165 ************************************ 00:07:14.165 00:44:01 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:14.165 00:44:01 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:14.165 00:44:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.165 00:44:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.165 ************************************ 00:07:14.165 START TEST locking_overlapped_coremask_via_rpc 00:07:14.165 ************************************ 00:07:14.165 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:07:14.165 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3936376 00:07:14.165 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:14.165 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3936376 /var/tmp/spdk.sock 00:07:14.165 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3936376 ']' 00:07:14.165 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.165 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:14.165 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.165 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:14.165 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.423 [2024-05-15 00:44:01.267288] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:14.423 [2024-05-15 00:44:01.267388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3936376 ] 00:07:14.423 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.423 [2024-05-15 00:44:01.328657] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:14.423 [2024-05-15 00:44:01.328707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.423 [2024-05-15 00:44:01.451959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.423 [2024-05-15 00:44:01.452025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.423 [2024-05-15 00:44:01.452061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.681 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:14.681 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:14.681 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3936387 00:07:14.681 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:14.681 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3936387 /var/tmp/spdk2.sock 00:07:14.681 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3936387 ']' 00:07:14.681 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.681 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:14.681 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.681 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:14.681 00:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.939 [2024-05-15 00:44:01.739838] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:14.939 [2024-05-15 00:44:01.739951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3936387 ] 00:07:14.939 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.939 [2024-05-15 00:44:01.828624] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:14.939 [2024-05-15 00:44:01.828670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.197 [2024-05-15 00:44:02.068070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.197 [2024-05-15 00:44:02.071974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:15.197 [2024-05-15 00:44:02.071977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.762 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:15.762 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:15.762 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:15.762 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.762 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.762 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.762 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:15.762 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:15.762 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:15.762 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:15.762 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.762 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:15.762 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.762 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:15.763 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.763 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.763 [2024-05-15 00:44:02.788038] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3936376 has claimed it. 
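The ERROR above is the point of this test: both targets started with --disable-cpumask-locks, the RPC then re-enabled locking on the first target (which claimed cores 0-2 via per-core lock files under /var/tmp), so the same RPC against the second target fails on the shared core 2. While the first target holds its claim, the lock files can be inspected directly; the check_remaining_locks helper further below does essentially this comparison:

  ls /var/tmp/spdk_cpu_lock_*    # expect spdk_cpu_lock_000 .. 002 while mask 0x7 is claimed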
00:07:15.763 request: 00:07:15.763 { 00:07:15.763 "method": "framework_enable_cpumask_locks", 00:07:15.763 "req_id": 1 00:07:15.763 } 00:07:15.763 Got JSON-RPC error response 00:07:15.763 response: 00:07:15.763 { 00:07:15.763 "code": -32603, 00:07:15.763 "message": "Failed to claim CPU core: 2" 00:07:15.763 } 00:07:15.763 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:15.763 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:15.763 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.763 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:15.763 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.763 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3936376 /var/tmp/spdk.sock 00:07:15.763 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3936376 ']' 00:07:15.763 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.763 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:15.763 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.763 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:15.763 00:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.328 00:44:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:16.328 00:44:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:16.328 00:44:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3936387 /var/tmp/spdk2.sock 00:07:16.328 00:44:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3936387 ']' 00:07:16.328 00:44:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.328 00:44:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:16.328 00:44:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
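The request/response pair above is the raw JSON-RPC exchange that rpc_cmd performs over the UNIX domain socket; -32603 is the standard JSON-RPC internal-error code, here carrying the 'Failed to claim CPU core: 2' message. The same call can be issued by hand with the rpc.py helper (a sketch using the socket path from this run):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk2.sock framework_enable_cpumask_locks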
00:07:16.328 00:44:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:16.328 00:44:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.585 00:44:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:16.585 00:44:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:16.585 00:44:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:16.585 00:44:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:16.585 00:44:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:16.585 00:44:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:16.585 00:07:16.585 real 0m2.211s 00:07:16.585 user 0m1.273s 00:07:16.585 sys 0m0.191s 00:07:16.585 00:44:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.585 00:44:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.585 ************************************ 00:07:16.585 END TEST locking_overlapped_coremask_via_rpc 00:07:16.585 ************************************ 00:07:16.585 00:44:03 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:16.585 00:44:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3936376 ]] 00:07:16.585 00:44:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3936376 00:07:16.585 00:44:03 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3936376 ']' 00:07:16.585 00:44:03 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3936376 00:07:16.585 00:44:03 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:16.585 00:44:03 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:16.585 00:44:03 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3936376 00:07:16.585 00:44:03 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:16.585 00:44:03 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:16.585 00:44:03 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3936376' 00:07:16.585 killing process with pid 3936376 00:07:16.585 00:44:03 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3936376 00:07:16.585 00:44:03 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3936376 00:07:16.842 00:44:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3936387 ]] 00:07:16.842 00:44:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3936387 00:07:16.842 00:44:03 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3936387 ']' 00:07:16.842 00:44:03 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3936387 00:07:16.842 00:44:03 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:16.842 00:44:03 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:07:16.842 00:44:03 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3936387 00:07:16.842 00:44:03 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:16.842 00:44:03 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:16.842 00:44:03 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3936387' 00:07:16.842 killing process with pid 3936387 00:07:16.842 00:44:03 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3936387 00:07:16.842 00:44:03 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3936387 00:07:17.409 00:44:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:17.409 00:44:04 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:17.409 00:44:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3936376 ]] 00:07:17.409 00:44:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3936376 00:07:17.409 00:44:04 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3936376 ']' 00:07:17.409 00:44:04 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3936376 00:07:17.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3936376) - No such process 00:07:17.409 00:44:04 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3936376 is not found' 00:07:17.409 Process with pid 3936376 is not found 00:07:17.409 00:44:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3936387 ]] 00:07:17.409 00:44:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3936387 00:07:17.409 00:44:04 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3936387 ']' 00:07:17.409 00:44:04 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3936387 00:07:17.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3936387) - No such process 00:07:17.409 00:44:04 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3936387 is not found' 00:07:17.409 Process with pid 3936387 is not found 00:07:17.409 00:44:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:17.409 00:07:17.409 real 0m16.090s 00:07:17.409 user 0m28.871s 00:07:17.409 sys 0m5.259s 00:07:17.409 00:44:04 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.409 00:44:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.410 ************************************ 00:07:17.410 END TEST cpu_locks 00:07:17.410 ************************************ 00:07:17.410 00:07:17.410 real 0m43.507s 00:07:17.410 user 1m24.550s 00:07:17.410 sys 0m9.621s 00:07:17.410 00:44:04 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.410 00:44:04 event -- common/autotest_common.sh@10 -- # set +x 00:07:17.410 ************************************ 00:07:17.410 END TEST event 00:07:17.410 ************************************ 00:07:17.410 00:44:04 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:17.410 00:44:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:17.410 00:44:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.410 00:44:04 -- common/autotest_common.sh@10 -- # set +x 00:07:17.410 ************************************ 00:07:17.410 START TEST thread 00:07:17.410 ************************************ 00:07:17.410 00:44:04 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:17.410 * Looking for test storage... 00:07:17.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:17.410 00:44:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:17.410 00:44:04 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:17.410 00:44:04 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.410 00:44:04 thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.410 ************************************ 00:07:17.410 START TEST thread_poller_perf 00:07:17.410 ************************************ 00:07:17.410 00:44:04 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:17.410 [2024-05-15 00:44:04.342118] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:17.410 [2024-05-15 00:44:04.342194] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3936784 ] 00:07:17.410 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.410 [2024-05-15 00:44:04.400305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.668 [2024-05-15 00:44:04.517024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.668 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:18.603 ====================================== 00:07:18.603 busy:2711435224 (cyc) 00:07:18.603 total_run_count: 261000 00:07:18.603 tsc_hz: 2700000000 (cyc) 00:07:18.603 ====================================== 00:07:18.603 poller_cost: 10388 (cyc), 3847 (nsec) 00:07:18.603 00:07:18.603 real 0m1.308s 00:07:18.603 user 0m1.229s 00:07:18.603 sys 0m0.073s 00:07:18.603 00:44:05 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.603 00:44:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:18.603 ************************************ 00:07:18.603 END TEST thread_poller_perf 00:07:18.603 ************************************ 00:07:18.862 00:44:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:18.862 00:44:05 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:18.862 00:44:05 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.862 00:44:05 thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.862 ************************************ 00:07:18.862 START TEST thread_poller_perf 00:07:18.862 ************************************ 00:07:18.862 00:44:05 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:18.862 [2024-05-15 00:44:05.709118] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
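In the poller_perf summaries, poller_cost is simply busy cycles divided by total_run_count, and the nanosecond figure follows from tsc_hz. Re-deriving the first run's numbers (illustrative only, values taken from the summary above):

  awk 'BEGIN { busy = 2711435224; runs = 261000; tsc = 2700000000
               cyc = busy / runs          # ~10388 cycles per poller invocation
               ns  = cyc / (tsc / 1e9)    # ~3847 ns at a 2.7 GHz TSC
               printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, ns }'

The second run below (-l 0, i.e. a 0 microsecond period) comes out far cheaper by the same arithmetic: 2702823500 / 3636000 gives ~743 cycles, about 275 ns.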
00:07:18.862 [2024-05-15 00:44:05.709186] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3936908 ] 00:07:18.862 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.862 [2024-05-15 00:44:05.767464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.862 [2024-05-15 00:44:05.886761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.862 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:20.235 ====================================== 00:07:20.235 busy:2702823500 (cyc) 00:07:20.235 total_run_count: 3636000 00:07:20.235 tsc_hz: 2700000000 (cyc) 00:07:20.235 ====================================== 00:07:20.235 poller_cost: 743 (cyc), 275 (nsec) 00:07:20.235 00:07:20.235 real 0m1.302s 00:07:20.235 user 0m1.225s 00:07:20.235 sys 0m0.069s 00:07:20.235 00:44:06 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.235 00:44:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:20.235 ************************************ 00:07:20.235 END TEST thread_poller_perf 00:07:20.235 ************************************ 00:07:20.235 00:44:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:20.235 00:07:20.235 real 0m2.781s 00:07:20.235 user 0m2.528s 00:07:20.235 sys 0m0.245s 00:07:20.235 00:44:07 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.235 00:44:07 thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.235 ************************************ 00:07:20.235 END TEST thread 00:07:20.235 ************************************ 00:07:20.235 00:44:07 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:20.235 00:44:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:20.235 00:44:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.235 00:44:07 -- common/autotest_common.sh@10 -- # set +x 00:07:20.235 ************************************ 00:07:20.235 START TEST accel 00:07:20.235 ************************************ 00:07:20.235 00:44:07 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:20.235 * Looking for test storage... 
00:07:20.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:20.235 00:44:07 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:20.235 00:44:07 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:20.235 00:44:07 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:20.235 00:44:07 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3937067 00:07:20.235 00:44:07 accel -- accel/accel.sh@63 -- # waitforlisten 3937067 00:07:20.235 00:44:07 accel -- common/autotest_common.sh@827 -- # '[' -z 3937067 ']' 00:07:20.235 00:44:07 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.235 00:44:07 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:20.235 00:44:07 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:20.235 00:44:07 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:20.235 00:44:07 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.235 00:44:07 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.235 00:44:07 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:20.235 00:44:07 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.235 00:44:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.235 00:44:07 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.235 00:44:07 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.235 00:44:07 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.235 00:44:07 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:20.235 00:44:07 accel -- accel/accel.sh@41 -- # jq -r . 00:07:20.235 [2024-05-15 00:44:07.197322] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:20.235 [2024-05-15 00:44:07.197413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3937067 ] 00:07:20.235 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.235 [2024-05-15 00:44:07.256304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.493 [2024-05-15 00:44:07.373131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.751 00:44:07 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:20.751 00:44:07 accel -- common/autotest_common.sh@860 -- # return 0 00:07:20.751 00:44:07 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:20.751 00:44:07 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:20.751 00:44:07 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:20.751 00:44:07 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:20.751 00:44:07 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:20.751 00:44:07 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:20.751 00:44:07 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:20.751 00:44:07 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.751 00:44:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.751 00:44:07 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.751 00:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.751 00:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.751 00:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.751 00:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.751 00:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.751 00:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.751 00:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.751 00:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.751 00:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.751 00:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.751 00:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.751 00:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.751 00:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.751 00:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.751 00:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.751 00:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.751 00:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.751 00:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.751 00:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.751 00:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.751 00:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.751 00:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.751 00:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.751 00:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.751 00:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.751 00:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.751 00:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.751 00:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.751 00:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.751 00:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.751 00:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.751 00:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.751 00:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.751 00:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.752 00:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.752 00:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.752 00:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.752 00:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.752 00:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.752 00:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.752 00:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.752 00:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.752 00:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.752 
00:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.752 00:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.752 00:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.752 00:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.752 00:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.752 00:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.752 00:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.752 00:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.752 00:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.752 00:44:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:20.752 00:44:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:20.752 00:44:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:20.752 00:44:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:20.752 00:44:07 accel -- accel/accel.sh@75 -- # killprocess 3937067 00:07:20.752 00:44:07 accel -- common/autotest_common.sh@946 -- # '[' -z 3937067 ']' 00:07:20.752 00:44:07 accel -- common/autotest_common.sh@950 -- # kill -0 3937067 00:07:20.752 00:44:07 accel -- common/autotest_common.sh@951 -- # uname 00:07:20.752 00:44:07 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:20.752 00:44:07 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3937067 00:07:20.752 00:44:07 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:20.752 00:44:07 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:20.752 00:44:07 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3937067' 00:07:20.752 killing process with pid 3937067 00:07:20.752 00:44:07 accel -- common/autotest_common.sh@965 -- # kill 3937067 00:07:20.752 00:44:07 accel -- common/autotest_common.sh@970 -- # wait 3937067 00:07:21.011 00:44:08 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:21.011 00:44:08 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:21.011 00:44:08 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:21.011 00:44:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.011 00:44:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.011 00:44:08 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:07:21.011 00:44:08 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:21.011 00:44:08 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:21.011 00:44:08 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.011 00:44:08 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.011 00:44:08 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.011 00:44:08 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.011 00:44:08 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.011 00:44:08 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:21.011 00:44:08 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
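The long run of IFS== / read -r opc module / expected_opcs[...]=software lines above is one small loop unrolled by xtrace: each 'opcode=module' pair printed by the accel_get_opc_assignments RPC is split on '=' and recorded, and every opcode maps to 'software' here because no hardware accel module is configured. A compacted reading of what the trace shows (reconstructed sketch, not copied from accel.sh):

  for opc_opt in "${exp_opcs[@]}"; do
      IFS== read -r opc module <<< "$opc_opt"   # e.g. 'copy=software' -> opc=copy, module=software
      expected_opcs["$opc"]=$module
  done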
00:07:21.011 00:44:08 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.011 00:44:08 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:21.269 00:44:08 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:21.269 00:44:08 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:21.269 00:44:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.269 00:44:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.269 ************************************ 00:07:21.269 START TEST accel_missing_filename 00:07:21.269 ************************************ 00:07:21.269 00:44:08 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:07:21.270 00:44:08 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:21.270 00:44:08 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:21.270 00:44:08 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:21.270 00:44:08 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.270 00:44:08 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:21.270 00:44:08 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.270 00:44:08 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:21.270 00:44:08 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:21.270 00:44:08 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:21.270 00:44:08 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.270 00:44:08 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.270 00:44:08 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.270 00:44:08 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.270 00:44:08 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.270 00:44:08 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:21.270 00:44:08 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:21.270 [2024-05-15 00:44:08.134773] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:21.270 [2024-05-15 00:44:08.134841] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3937203 ] 00:07:21.270 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.270 [2024-05-15 00:44:08.193697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.270 [2024-05-15 00:44:08.313450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.528 [2024-05-15 00:44:08.365122] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.528 [2024-05-15 00:44:08.414506] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:07:21.528 A filename is required. 
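This 'A filename is required.' failure is the expected negative result: compress/decompress workloads need an input file via -l, which accel_missing_filename deliberately omits. The follow-up test below supplies the repo's test input but adds -y, which compress also rejects; the minimal invocation that should get past argument parsing would be (illustrative, using this run's paths):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib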
00:07:21.529 00:44:08 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:21.529 00:44:08 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.529 00:44:08 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:21.529 00:44:08 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:21.529 00:44:08 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:21.529 00:44:08 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.529 00:07:21.529 real 0m0.408s 00:07:21.529 user 0m0.319s 00:07:21.529 sys 0m0.124s 00:07:21.529 00:44:08 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.529 00:44:08 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:21.529 ************************************ 00:07:21.529 END TEST accel_missing_filename 00:07:21.529 ************************************ 00:07:21.529 00:44:08 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.529 00:44:08 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:21.529 00:44:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.529 00:44:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.529 ************************************ 00:07:21.529 START TEST accel_compress_verify 00:07:21.529 ************************************ 00:07:21.529 00:44:08 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.529 00:44:08 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:21.787 00:44:08 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.787 00:44:08 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:21.787 00:44:08 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.787 00:44:08 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:21.787 00:44:08 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.787 00:44:08 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.787 00:44:08 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.787 00:44:08 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:21.787 00:44:08 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.787 00:44:08 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.787 00:44:08 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.787 00:44:08 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.787 00:44:08 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.787 
00:44:08 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:21.787 00:44:08 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:21.787 [2024-05-15 00:44:08.603220] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:21.787 [2024-05-15 00:44:08.603287] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3937236 ] 00:07:21.787 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.787 [2024-05-15 00:44:08.662286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.787 [2024-05-15 00:44:08.781269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.787 [2024-05-15 00:44:08.831574] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.047 [2024-05-15 00:44:08.880772] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:07:22.047 00:07:22.047 Compression does not support the verify option, aborting. 00:07:22.047 00:44:08 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:22.047 00:44:08 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.047 00:44:08 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:22.047 00:44:08 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:22.047 00:44:08 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:22.047 00:44:08 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.047 00:07:22.047 real 0m0.405s 00:07:22.047 user 0m0.335s 00:07:22.047 sys 0m0.109s 00:07:22.047 00:44:08 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.047 00:44:08 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:22.047 ************************************ 00:07:22.047 END TEST accel_compress_verify 00:07:22.047 ************************************ 00:07:22.047 00:44:09 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:22.047 00:44:09 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:22.047 00:44:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.047 00:44:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.047 ************************************ 00:07:22.047 START TEST accel_wrong_workload 00:07:22.047 ************************************ 00:07:22.047 00:44:09 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:22.047 00:44:09 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:22.047 00:44:09 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:22.047 00:44:09 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:22.047 00:44:09 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.047 00:44:09 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:22.047 00:44:09 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.047 00:44:09 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:07:22.047 00:44:09 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:22.047 00:44:09 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:22.047 00:44:09 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.047 00:44:09 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.047 00:44:09 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.047 00:44:09 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.047 00:44:09 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.047 00:44:09 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:22.047 00:44:09 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:22.047 Unsupported workload type: foobar 00:07:22.047 [2024-05-15 00:44:09.067000] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:22.047 accel_perf options: 00:07:22.047 [-h help message] 00:07:22.048 [-q queue depth per core] 00:07:22.048 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:22.048 [-T number of threads per core 00:07:22.048 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:22.048 [-t time in seconds] 00:07:22.048 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:22.048 [ dif_verify, , dif_generate, dif_generate_copy 00:07:22.048 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:22.048 [-l for compress/decompress workloads, name of uncompressed input file 00:07:22.048 [-S for crc32c workload, use this seed value (default 0) 00:07:22.048 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:22.048 [-f for fill workload, use this BYTE value (default 255) 00:07:22.048 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:22.048 [-y verify result if this switch is on] 00:07:22.048 [-a tasks to allocate per core (default: same value as -q)] 00:07:22.048 Can be used to spread operations across a wider range of memory. 
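accel_perf prints this full option table whenever argument parsing fails; 'foobar' is simply not in the -w workload list, so the app exits with status 1 before doing any work. Any listed workload is accepted instead, e.g. the crc32c form that the suite itself runs a few tests further down:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y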
00:07:22.048 00:44:09 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:22.048 00:44:09 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.048 00:44:09 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:22.048 00:44:09 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.048 00:07:22.048 real 0m0.025s 00:07:22.048 user 0m0.013s 00:07:22.048 sys 0m0.012s 00:07:22.048 00:44:09 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.048 00:44:09 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:22.048 ************************************ 00:07:22.048 END TEST accel_wrong_workload 00:07:22.048 ************************************ 00:07:22.048 Error: writing output failed: Broken pipe 00:07:22.048 00:44:09 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:22.048 00:44:09 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:22.048 00:44:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.048 00:44:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.307 ************************************ 00:07:22.307 START TEST accel_negative_buffers 00:07:22.307 ************************************ 00:07:22.307 00:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:22.307 00:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:22.307 00:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:22.307 00:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:22.307 00:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.307 00:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:22.307 00:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.307 00:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:22.307 00:44:09 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:22.307 00:44:09 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:22.307 00:44:09 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.307 00:44:09 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.307 00:44:09 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.307 00:44:09 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.307 00:44:09 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.307 00:44:09 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:22.307 00:44:09 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:22.307 -x option must be non-negative. 
00:07:22.307 [2024-05-15 00:44:09.151006] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:22.307 accel_perf options: 00:07:22.307 [-h help message] 00:07:22.307 [-q queue depth per core] 00:07:22.307 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:22.307 [-T number of threads per core 00:07:22.307 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:22.307 [-t time in seconds] 00:07:22.307 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:22.307 [ dif_verify, , dif_generate, dif_generate_copy 00:07:22.307 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:22.307 [-l for compress/decompress workloads, name of uncompressed input file 00:07:22.307 [-S for crc32c workload, use this seed value (default 0) 00:07:22.307 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:22.307 [-f for fill workload, use this BYTE value (default 255) 00:07:22.307 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:22.307 [-y verify result if this switch is on] 00:07:22.307 [-a tasks to allocate per core (default: same value as -q)] 00:07:22.307 Can be used to spread operations across a wider range of memory. 00:07:22.307 00:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:22.307 00:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.307 00:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:22.307 00:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.307 00:07:22.307 real 0m0.025s 00:07:22.307 user 0m0.016s 00:07:22.307 sys 0m0.010s 00:07:22.307 00:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.307 00:44:09 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:22.307 ************************************ 00:07:22.307 END TEST accel_negative_buffers 00:07:22.307 ************************************ 00:07:22.307 Error: writing output failed: Broken pipe 00:07:22.307 00:44:09 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:22.307 00:44:09 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:22.307 00:44:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.307 00:44:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.307 ************************************ 00:07:22.307 START TEST accel_crc32c 00:07:22.307 ************************************ 00:07:22.307 00:44:09 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:22.308 00:44:09 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:22.308 00:44:09 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:22.308 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.308 00:44:09 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:22.308 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.308 00:44:09 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:07:22.308 00:44:09 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:22.308 00:44:09 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.308 00:44:09 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.308 00:44:09 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.308 00:44:09 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.308 00:44:09 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.308 00:44:09 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:22.308 00:44:09 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:22.308 [2024-05-15 00:44:09.235315] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:22.308 [2024-05-15 00:44:09.235386] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3937384 ] 00:07:22.308 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.308 [2024-05-15 00:44:09.295404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.566 [2024-05-15 00:44:09.416110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 00:44:09 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.566 00:44:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 00:44:10 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:23.941 00:44:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.941 00:07:23.941 real 0m1.418s 00:07:23.941 user 0m1.286s 00:07:23.941 sys 0m0.133s 00:07:23.941 00:44:10 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.941 00:44:10 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:23.941 ************************************ 00:07:23.941 END TEST accel_crc32c 00:07:23.941 ************************************ 00:07:23.941 00:44:10 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:23.941 00:44:10 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:23.941 00:44:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.941 00:44:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.941 ************************************ 00:07:23.941 START TEST accel_crc32c_C2 00:07:23.941 ************************************ 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:23.941 [2024-05-15 00:44:10.716601] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:23.941 [2024-05-15 00:44:10.716673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3937509 ] 00:07:23.941 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.941 [2024-05-15 00:44:10.778273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.941 [2024-05-15 00:44:10.896791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.941 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.942 00:44:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.315 00:44:12 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.315 00:07:25.315 real 0m1.413s 00:07:25.315 user 0m1.287s 00:07:25.315 sys 0m0.128s 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.315 00:44:12 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:25.315 ************************************ 00:07:25.315 END TEST accel_crc32c_C2 00:07:25.315 ************************************ 00:07:25.315 00:44:12 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:25.315 00:44:12 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:25.315 00:44:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.315 00:44:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.315 ************************************ 00:07:25.315 START TEST accel_copy 00:07:25.315 ************************************ 00:07:25.315 00:44:12 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:25.315 00:44:12 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:25.315 00:44:12 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:25.315 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.315 00:44:12 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:25.315 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.315 00:44:12 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:25.315 00:44:12 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:25.315 00:44:12 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.315 00:44:12 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.315 00:44:12 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.315 00:44:12 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.315 00:44:12 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.315 00:44:12 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:25.315 00:44:12 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:25.315 [2024-05-15 00:44:12.193212] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:25.315 [2024-05-15 00:44:12.193282] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3937718 ] 00:07:25.315 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.315 [2024-05-15 00:44:12.253301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.574 [2024-05-15 00:44:12.372150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.574 00:44:12 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.574 00:44:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.528 00:44:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
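The records above are dominated by three xtrace lines, accel/accel.sh@19 "IFS=:", accel/accel.sh@19 "read -r var val", and accel/accel.sh@21 "case "$var" in", which are one parser loop firing once per line of accel_perf output: each "Key: value" line is split on the colon, and the opcode (set at accel.sh@23) and the engine module (set at accel.sh@22) are captured for the assertions traced at accel/accel.sh@27. A minimal sketch of that loop, with the case patterns assumed rather than taken from the script, since the trace shows only the values they matched:

    while IFS=: read -r var val; do                  # split each output line on ':'
        case "$var" in                               # patterns here are assumptions
            *"Workload Type"*) accel_opc=${val//[[:space:]]/} ;;    # e.g. crc32c
            *"Module"*)        accel_module=${val//[[:space:]]/} ;; # e.g. software
        esac
    done < <(accel_perf -t 1 -w copy -y)   # accel_perf stands in for the full build path

    # Mirrors the checks traced at accel/accel.sh@27:
    [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]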
00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:26.791 00:44:13 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.791 00:07:26.791 real 0m1.412s 00:07:26.791 user 0m1.293s 00:07:26.791 sys 0m0.119s 00:07:26.791 00:44:13 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.791 00:44:13 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:26.791 ************************************ 00:07:26.791 END TEST accel_copy 00:07:26.791 ************************************ 00:07:26.791 00:44:13 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:26.791 00:44:13 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:26.791 00:44:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.791 00:44:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.791 ************************************ 00:07:26.791 START TEST accel_fill 00:07:26.791 ************************************ 00:07:26.791 00:44:13 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:26.791 00:44:13 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:26.791 00:44:13 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:26.791 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.791 00:44:13 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:26.791 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.791 00:44:13 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:26.791 00:44:13 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:26.791 00:44:13 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.791 00:44:13 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.791 00:44:13 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.791 00:44:13 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.791 00:44:13 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.791 00:44:13 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:26.791 00:44:13 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:26.791 [2024-05-15 00:44:13.659478] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:26.791 [2024-05-15 00:44:13.659562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3937847 ] 00:07:26.791 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.791 [2024-05-15 00:44:13.720903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.791 [2024-05-15 00:44:13.840612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.050 00:44:13 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.050 00:44:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.423 00:44:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:28.424 00:44:15 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.424 00:07:28.424 real 0m1.418s 00:07:28.424 user 0m1.289s 00:07:28.424 sys 0m0.130s 00:07:28.424 00:44:15 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.424 00:44:15 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:28.424 ************************************ 00:07:28.424 END TEST accel_fill 00:07:28.424 ************************************ 00:07:28.424 00:44:15 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:28.424 00:44:15 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:28.424 00:44:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.424 00:44:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.424 ************************************ 00:07:28.424 START TEST accel_copy_crc32c 00:07:28.424 ************************************ 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
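Each START TEST/END TEST banner pair, together with the real/user/sys triple printed between them, comes from the run_test helper in common/autotest_common.sh, whose entry (the '[' 7 -le 1 ']' argument guard) and xtrace_disable calls appear in the trace. A hedged reconstruction from those visible effects only; the real helper does more bookkeeping than this:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"            # produces the real/user/sys lines seen in this log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    # Invocation as traced at accel/accel.sh@105 above:
    run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y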
00:07:28.424 [2024-05-15 00:44:15.138188] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:28.424 [2024-05-15 00:44:15.138255] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3937969 ] 00:07:28.424 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.424 [2024-05-15 00:44:15.197803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.424 [2024-05-15 00:44:15.317873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.424 00:44:15 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.424 00:44:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.799 00:07:29.799 real 0m1.416s 00:07:29.799 user 0m1.296s 00:07:29.799 sys 0m0.122s 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.799 00:44:16 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:29.799 ************************************ 00:07:29.799 END TEST accel_copy_crc32c 00:07:29.799 ************************************ 00:07:29.799 00:44:16 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:29.799 00:44:16 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:29.799 00:44:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.799 00:44:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.799 ************************************ 00:07:29.799 START TEST accel_copy_crc32c_C2 00:07:29.799 ************************************ 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:29.799 [2024-05-15 00:44:16.613581] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:29.799 [2024-05-15 00:44:16.613658] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938182 ] 00:07:29.799 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.799 [2024-05-15 00:44:16.673272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.799 [2024-05-15 00:44:16.792778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.799 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.800 00:44:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.191 00:07:31.191 real 0m1.415s 00:07:31.191 user 0m1.291s 00:07:31.191 sys 0m0.126s 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.191 00:44:18 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:31.191 
************************************ 00:07:31.191 END TEST accel_copy_crc32c_C2 00:07:31.191 ************************************ 00:07:31.191 00:44:18 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:31.191 00:44:18 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:31.191 00:44:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.191 00:44:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.191 ************************************ 00:07:31.191 START TEST accel_dualcast 00:07:31.191 ************************************ 00:07:31.191 00:44:18 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:31.191 00:44:18 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:31.191 00:44:18 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:31.191 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.191 00:44:18 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:31.191 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.191 00:44:18 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:31.191 00:44:18 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:31.191 00:44:18 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.191 00:44:18 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.191 00:44:18 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.192 00:44:18 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.192 00:44:18 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.192 00:44:18 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:31.192 00:44:18 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:31.192 [2024-05-15 00:44:18.083619] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
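Most of the workloads in this stretch of the run (crc32c, copy_crc32c, and their -C 2 variants) exercise CRC-32C, the Castagnoli CRC used for iSCSI and NVMe data protection; it differs from the zlib CRC-32 only in its polynomial, 0x82F63B38 in reflected form. A bit-at-a-time reference in the same shell dialect as the harness, handy for spot-checking a checksum by hand; this is an illustrative sketch, not code from the SPDK tree:

    crc32c() {                  # CRC-32C of the characters of $1 (ASCII assumed)
        local -i crc=0xFFFFFFFF byte bit i
        local data=$1
        for ((i = 0; i < ${#data}; i++)); do
            printf -v byte '%d' "'${data:i:1}"       # character -> byte value
            crc=$((crc ^ byte))
            for ((bit = 0; bit < 8; bit++)); do
                ((crc & 1)) && crc=$(((crc >> 1) ^ 0x82F63B38)) || crc=$((crc >> 1))
            done
        done
        printf '0x%08x\n' $((crc ^ 0xFFFFFFFF))
    }

    crc32c 123456789            # prints 0xe3069283, the standard check value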
00:07:31.192 [2024-05-15 00:44:18.083689] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938305 ] 00:07:31.192 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.192 [2024-05-15 00:44:18.142402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.470 [2024-05-15 00:44:18.261084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.470 
00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.470 00:44:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 00:44:19 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:32.857 00:44:19 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.857 00:07:32.857 real 0m1.406s 00:07:32.857 user 0m1.273s 00:07:32.857 sys 0m0.135s 00:07:32.857 00:44:19 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.857 00:44:19 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:32.857 ************************************ 00:07:32.857 END TEST accel_dualcast 00:07:32.857 ************************************ 00:07:32.857 00:44:19 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:32.857 00:44:19 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:32.857 00:44:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.857 00:44:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.857 ************************************ 00:07:32.857 START TEST accel_compare 00:07:32.857 ************************************ 00:07:32.857 00:44:19 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:32.857 00:44:19 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:32.857 00:44:19 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:32.857 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:32.858 [2024-05-15 00:44:19.543711] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
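The accel_compare case is the same harness with the compare opcode; under the same ./spdk path assumption:

    # compare 4096-byte buffers for 1 second with verification
    ./spdk/build/examples/accel_perf -t 1 -w compare -y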
00:07:32.858 [2024-05-15 00:44:19.543779] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938439 ] 00:07:32.858 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.858 [2024-05-15 00:44:19.602638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.858 [2024-05-15 00:44:19.722023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.858 00:44:19 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:32.858 00:44:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.228 00:44:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:34.228 00:44:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:20 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:34.229 00:44:20 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.229 00:07:34.229 real 0m1.408s 00:07:34.229 user 0m1.281s 00:07:34.229 sys 0m0.127s 00:07:34.229 00:44:20 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.229 00:44:20 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:34.229 ************************************ 00:07:34.229 END TEST accel_compare 00:07:34.229 ************************************ 00:07:34.229 00:44:20 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:34.229 00:44:20 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:34.229 00:44:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.229 00:44:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.229 ************************************ 00:07:34.229 START TEST accel_xor 00:07:34.229 ************************************ 00:07:34.229 00:44:20 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:34.229 00:44:20 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:34.229 00:44:20 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:34.229 00:44:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:20 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:34.229 00:44:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:20 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:34.229 00:44:20 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:34.229 00:44:20 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.229 00:44:20 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.229 00:44:20 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.229 00:44:20 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.229 00:44:20 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.229 00:44:20 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:34.229 00:44:20 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:34.229 [2024-05-15 00:44:21.007744] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
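The first accel_xor pass runs with the default source count — the val=2 read in the trace above is the number of xor source buffers. A sketch under the same ./spdk path assumption:

    # xor two 4096-byte sources into one destination for 1 second
    ./spdk/build/examples/accel_perf -t 1 -w xor -y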
00:07:34.229 [2024-05-15 00:44:21.007815] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938648 ] 00:07:34.229 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.229 [2024-05-15 00:44:21.066205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.229 [2024-05-15 00:44:21.186103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.229 00:44:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.602 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.602 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.602 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.602 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.602 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.602 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.602 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.602 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.602 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.602 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.602 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.602 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.602 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.603 
00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.603 00:07:35.603 real 0m1.412s 00:07:35.603 user 0m1.294s 00:07:35.603 sys 0m0.120s 00:07:35.603 00:44:22 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.603 00:44:22 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:35.603 ************************************ 00:07:35.603 END TEST accel_xor 00:07:35.603 ************************************ 00:07:35.603 00:44:22 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:35.603 00:44:22 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:35.603 00:44:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.603 00:44:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.603 ************************************ 00:07:35.603 START TEST accel_xor 00:07:35.603 ************************************ 00:07:35.603 00:44:22 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:35.603 00:44:22 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:35.603 [2024-05-15 00:44:22.481257] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
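The second accel_xor pass differs only in -x 3, which raises the xor source count from the default two to three (reflected as val=3 in the trace above):

    # same xor workload with three source buffers
    ./spdk/build/examples/accel_perf -t 1 -w xor -y -x 3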
00:07:35.603 [2024-05-15 00:44:22.481326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938767 ] 00:07:35.603 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.603 [2024-05-15 00:44:22.540236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.861 [2024-05-15 00:44:22.660159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:35.861 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:35.862 00:44:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.236 
00:44:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:37.236 00:44:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.236 00:07:37.236 real 0m1.416s 00:07:37.236 user 0m1.286s 00:07:37.236 sys 0m0.133s 00:07:37.236 00:44:23 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.236 00:44:23 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:37.236 ************************************ 00:07:37.236 END TEST accel_xor 00:07:37.236 ************************************ 00:07:37.236 00:44:23 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:37.236 00:44:23 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:37.236 00:44:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.236 00:44:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.236 ************************************ 00:07:37.236 START TEST accel_dif_verify 00:07:37.236 ************************************ 00:07:37.236 00:44:23 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:37.236 00:44:23 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:37.236 00:44:23 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:37.236 00:44:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:23 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:37.236 00:44:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:23 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:37.236 00:44:23 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:37.236 00:44:23 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.236 00:44:23 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.236 00:44:23 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.236 00:44:23 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.236 00:44:23 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.236 00:44:23 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:37.236 00:44:23 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:37.236 [2024-05-15 00:44:23.953062] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
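accel_dif_verify introduces the DIF parameters visible in the trace below: 4096-byte transfers carved into 512-byte blocks, each carrying 8 bytes of protection metadata. The equivalent standalone run, mirroring the traced command line:

    # verify DIF over 4096-byte payloads (512-byte blocks + 8-byte metadata)
    ./spdk/build/examples/accel_perf -t 1 -w dif_verify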
00:07:37.236 [2024-05-15 00:44:23.953138] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938894 ] 00:07:37.236 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.236 [2024-05-15 00:44:24.011738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.236 [2024-05-15 00:44:24.129867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 
00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.236 00:44:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:38.610 
00:44:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:38.610 00:44:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.610 00:07:38.610 real 0m1.409s 00:07:38.610 user 0m1.282s 00:07:38.610 sys 0m0.130s 00:07:38.610 00:44:25 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.610 00:44:25 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:38.610 ************************************ 00:07:38.610 END TEST accel_dif_verify 00:07:38.610 ************************************ 00:07:38.610 00:44:25 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:38.610 00:44:25 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:38.610 00:44:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.610 00:44:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.610 ************************************ 00:07:38.610 START TEST accel_dif_generate 00:07:38.610 ************************************ 00:07:38.610 00:44:25 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 
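accel_dif_generate is the generation-side counterpart, producing protection information over the same 4096/512/8 layout seen in the trace below rather than checking it; the traced command line is simply:

    # generate DIF for 4096-byte payloads for 1 second
    ./spdk/build/examples/accel_perf -t 1 -w dif_generate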
00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:38.610 [2024-05-15 00:44:25.419811] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:38.610 [2024-05-15 00:44:25.419879] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939105 ] 00:07:38.610 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.610 [2024-05-15 00:44:25.489758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.610 [2024-05-15 00:44:25.610103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.610 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.611 00:44:25 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.611 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.869 00:44:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:39.803 00:44:26 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.803 00:07:39.803 real 0m1.426s 00:07:39.803 user 0m1.294s 00:07:39.803 sys 
0m0.136s 00:07:39.803 00:44:26 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.803 00:44:26 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:39.803 ************************************ 00:07:39.803 END TEST accel_dif_generate 00:07:39.803 ************************************ 00:07:39.803 00:44:26 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:39.803 00:44:26 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:39.803 00:44:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.803 00:44:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.061 ************************************ 00:07:40.061 START TEST accel_dif_generate_copy 00:07:40.062 ************************************ 00:07:40.062 00:44:26 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:40.062 00:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:40.062 00:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:40.062 00:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.062 00:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:40.062 00:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.062 00:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:40.062 00:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:40.062 00:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.062 00:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.062 00:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.062 00:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.062 00:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.062 00:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:40.062 00:44:26 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:40.062 [2024-05-15 00:44:26.905779] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
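The accel.sh@12 trace above shows how each of these tests launches the example binary: build_accel_config assembles an accel JSON configuration in memory and hands it to accel_perf as an anonymous file descriptor rather than a file on disk, which is why the command line reads -c /dev/fd/62. A minimal sketch of that /dev/fd pattern, with an empty stand-in config (the real JSON body and fd number come from build_accel_config):

exec 62< <(printf '%s' '{}')   # expose the config on fd 62, readable as /dev/fd/62
./build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy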
00:07:40.062 [2024-05-15 00:44:26.905850] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939233 ] 00:07:40.062 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.062 [2024-05-15 00:44:26.966000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.062 [2024-05-15 00:44:27.085719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.320 00:44:27 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.320 00:44:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
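Nearly every line in this stretch is the same repeating pattern from accel.sh statements 19 through 21: set IFS=:, read -r var val, then a case on "$var". The loop is walking accel_perf's key:value summary output one field at a time, and when the module and opcode fields come past it records them, traced above as accel_module=software (accel.sh@22) and accel_opc=dif_generate_copy (accel.sh@23). A minimal sketch of that loop; the case labels are illustrative, not the script's exact patterns:

while IFS=: read -r var val; do
  case "$var" in
    *"module"*) accel_module=$val ;;   # accel.sh@22 in the trace
    *"opcode"*) accel_opc=$val ;;      # accel.sh@23 in the trace
  esac
done < <(accel_perf_output)            # stand-in for the accel_perf pipeline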
00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.254 00:07:41.254 real 0m1.416s 00:07:41.254 user 0m1.290s 00:07:41.254 sys 0m0.128s 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.254 00:44:28 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:41.254 ************************************ 00:07:41.254 END TEST accel_dif_generate_copy 00:07:41.254 ************************************ 00:07:41.518 00:44:28 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:41.518 00:44:28 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.518 00:44:28 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:41.518 00:44:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.518 00:44:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.518 ************************************ 00:07:41.518 START TEST accel_comp 00:07:41.518 ************************************ 00:07:41.518 00:44:28 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.518 00:44:28 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:41.518 00:44:28 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:41.518 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.518 00:44:28 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.518 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.518 00:44:28 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.518 00:44:28 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:41.518 00:44:28 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.518 00:44:28 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.518 00:44:28 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.518 00:44:28 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.518 00:44:28 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.518 00:44:28 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:41.518 00:44:28 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:41.518 [2024-05-15 00:44:28.382969] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:41.518 [2024-05-15 00:44:28.383046] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939352 ] 00:07:41.518 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.518 [2024-05-15 00:44:28.442460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.518 [2024-05-15 00:44:28.561737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.776 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.776 00:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.776 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.776 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.776 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.776 00:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.776 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.776 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.776 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.776 00:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.776 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.776 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.776 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:41.776 00:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.777 
00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.777 00:44:28 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.777 00:44:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.149 00:44:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.150 00:44:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.150 00:44:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:43.150 00:44:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.150 00:07:43.150 real 0m1.416s 00:07:43.150 user 0m1.281s 00:07:43.150 sys 0m0.137s 00:07:43.150 00:44:29 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.150 00:44:29 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:43.150 ************************************ 00:07:43.150 END TEST accel_comp 00:07:43.150 ************************************ 00:07:43.150 00:44:29 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:43.150 00:44:29 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:43.150 00:44:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.150 00:44:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.150 ************************************ 00:07:43.150 START TEST accel_decomp 00:07:43.150 ************************************ 00:07:43.150 00:44:29 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:43.150 00:44:29 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:43.150 00:44:29 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:43.150 00:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:29 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:43.150 00:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:29 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:43.150 00:44:29 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:43.150 00:44:29 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.150 00:44:29 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.150 00:44:29 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.150 00:44:29 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.150 00:44:29 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.150 00:44:29 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:43.150 00:44:29 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:43.150 [2024-05-15 00:44:29.854465] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:43.150 [2024-05-15 00:44:29.854533] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939564 ] 00:07:43.150 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.150 [2024-05-15 00:44:29.913773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.150 [2024-05-15 00:44:30.047411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.150 00:44:30 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.150 00:44:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:44.524 00:44:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.524 00:07:44.524 real 0m1.459s 00:07:44.524 user 0m1.325s 00:07:44.524 sys 0m0.136s 00:07:44.524 00:44:31 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.524 00:44:31 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:44.524 ************************************ 00:07:44.524 END TEST accel_decomp 00:07:44.524 ************************************ 00:07:44.524 
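Each test ends with the three accel.sh@27 assertions seen just above the END TEST banner: the parse loop must have captured a module name, the requested opcode must have been exercised, and the module must be the software engine, since no hardware accel engine is configured in this run. The backslashes in [[ software == \s\o\f\t\w\a\r\e ]] are only xtrace's way of printing a glob-escaped right-hand side, i.e. a literal string comparison. With the variables substituted back in, the checks amount to:

[[ -n $accel_module ]]            # a module was reported at all
[[ -n $accel_opc ]]               # the opcode under test actually ran
[[ $accel_module == "software" ]] # software engine expected in this configuration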
00:44:31 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:44.524 00:44:31 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:44.524 00:44:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.524 00:44:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.524 ************************************ 00:07:44.524 START TEST accel_decmop_full 00:07:44.524 ************************************ 00:07:44.524 00:44:31 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:44.524 00:44:31 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:44.524 00:44:31 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:44.524 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.524 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.524 00:44:31 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:44.524 00:44:31 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:44.524 00:44:31 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:44.524 00:44:31 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.524 00:44:31 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.524 00:44:31 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.525 00:44:31 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.525 00:44:31 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.525 00:44:31 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:44.525 00:44:31 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:44.525 [2024-05-15 00:44:31.369441] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
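run_test here, as for every other case in this file, comes from autotest_common.sh: it prints the starred START TEST banner, times the wrapped command, and prints the matching END TEST banner, which is where the recurring real/user/sys triplets in this log come from. The '[' 11 -le 1 ']' check at autotest_common.sh@1097 is guarding that the wrapped command actually has arguments (6 tokens for the plain accel_test runs above, 11 once -l, -y and -o are added). A simplified sketch of the wrapper; the real one also toggles xtrace, which is what the xtrace_disable calls above do:

run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                      # produces the real/user/sys lines in this log
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}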
00:07:44.525 [2024-05-15 00:44:31.369523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939693 ] 00:07:44.525 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.525 [2024-05-15 00:44:31.429631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.525 [2024-05-15 00:44:31.549015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
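accel_decmop_full is the same decompress case rerun with -o 0, and its effect is already visible in the trace above: the data-size field parsed out of accel_perf switches from the '4096 bytes' of the earlier tests to '111250 bytes', which matches handling the whole test/accel/bib input as a single transfer instead of 4 KiB chunks. Side by side, with the long workspace prefix shortened to a stand-in $SPDK_DIR, these are the two accel_perf command lines traced at accel.sh@12:

accel_perf -c /dev/fd/62 -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y        # default transfer size: 4096-byte buffers
accel_perf -c /dev/fd/62 -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0   # -o 0: one 111250-byte buffer, the full file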
00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.783 00:44:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:46.157 00:44:32 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.157 00:07:46.157 real 0m1.431s 00:07:46.157 user 0m1.303s 00:07:46.157 sys 0m0.129s 00:07:46.157 00:44:32 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.157 00:44:32 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:46.157 ************************************ 00:07:46.157 END TEST accel_decmop_full 00:07:46.157 ************************************ 00:07:46.157 00:44:32 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:46.157 00:44:32 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:46.157 00:44:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.157 00:44:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.157 ************************************ 00:07:46.157 START TEST accel_decomp_mcore 00:07:46.157 ************************************ 00:07:46.157 00:44:32 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:46.157 00:44:32 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:46.157 00:44:32 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:46.157 00:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:32 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:46.158 00:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:32 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:46.158 00:44:32 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:46.158 00:44:32 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.158 00:44:32 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.158 00:44:32 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.158 00:44:32 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.158 00:44:32 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.158 00:44:32 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:46.158 00:44:32 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:46.158 [2024-05-15 00:44:32.856530] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:46.158 [2024-05-15 00:44:32.856596] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939820 ] 00:07:46.158 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.158 [2024-05-15 00:44:32.916191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.158 [2024-05-15 00:44:33.039548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.158 [2024-05-15 00:44:33.039598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.158 [2024-05-15 00:44:33.039646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.158 [2024-05-15 00:44:33.039649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.158 00:44:33 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:33 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.158 00:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.530 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.530 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.530 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.530 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.530 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.530 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.530 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.530 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.530 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.530 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.530 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.530 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.530 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.530 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.530 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.530 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.530 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.531 00:07:47.531 real 0m1.431s 00:07:47.531 user 0m4.625s 00:07:47.531 sys 0m0.135s 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.531 00:44:34 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:47.531 ************************************ 00:07:47.531 END TEST accel_decomp_mcore 00:07:47.531 ************************************ 00:07:47.531 00:44:34 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.531 00:44:34 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:47.531 00:44:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.531 00:44:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.531 ************************************ 00:07:47.531 START TEST accel_decomp_full_mcore 00:07:47.531 ************************************ 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:47.531 [2024-05-15 00:44:34.347487] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:47.531 [2024-05-15 00:44:34.347561] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940025 ] 00:07:47.531 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.531 [2024-05-15 00:44:34.407139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.531 [2024-05-15 00:44:34.530201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.531 [2024-05-15 00:44:34.530280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.531 [2024-05-15 00:44:34.530359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.531 [2024-05-15 00:44:34.530363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:47.531 00:44:34 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:47.531 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.790 00:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.725 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.725 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.725 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.725 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.725 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.725 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.725 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.725 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.725 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.725 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.725 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.725 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.725 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.725 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.725 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.725 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.726 00:07:48.726 real 0m1.441s 00:07:48.726 user 0m4.678s 00:07:48.726 sys 0m0.131s 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.726 00:44:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:48.726 ************************************ 00:07:48.726 END TEST accel_decomp_full_mcore 00:07:48.726 ************************************ 00:07:48.985 00:44:35 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:48.985 00:44:35 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:48.985 00:44:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.985 00:44:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:48.985 ************************************ 00:07:48.985 START TEST accel_decomp_mthread 00:07:48.985 ************************************ 00:07:48.985 00:44:35 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:48.985 00:44:35 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:48.985 00:44:35 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:48.985 00:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.985 00:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.985 00:44:35 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:48.985 00:44:35 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:48.985 00:44:35 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:48.985 00:44:35 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.985 00:44:35 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.985 00:44:35 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.985 00:44:35 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.985 00:44:35 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.985 00:44:35 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:48.985 00:44:35 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:07:48.985 [2024-05-15 00:44:35.837146] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:48.985 [2024-05-15 00:44:35.837229] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940164 ] 00:07:48.985 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.985 [2024-05-15 00:44:35.898775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.985 [2024-05-15 00:44:36.018091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.244 00:44:36 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.244 00:44:36 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.619 00:07:50.619 real 0m1.424s 00:07:50.619 user 0m1.292s 00:07:50.619 sys 0m0.133s 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.619 00:44:37 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:50.619 ************************************ 00:07:50.619 END TEST accel_decomp_mthread 00:07:50.619 ************************************ 00:07:50.619 00:44:37 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.619 00:44:37 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:50.619 00:44:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.619 00:44:37 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.619 ************************************ 00:07:50.619 START TEST accel_decomp_full_mthread 00:07:50.619 ************************************ 00:07:50.619 00:44:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.619 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:50.619 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:50.619 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.619 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:50.620 [2024-05-15 00:44:37.319558] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
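[annotation] The accel_decomp_full_mthread run beginning here drives the same accel_perf binary as the earlier tests; a hand-runnable equivalent of the traced command, with flags copied verbatim from the trace and $SPDK_DIR standing in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk workspace path, would be:

  # Software decompress for 1 second with 2 threads (-T 2), verifying
  # output (-y), full-block output size (-o 0), reading the pre-built
  # bib input; the harness pipes the JSON accel config in on /dev/fd/62.
  "$SPDK_DIR"/build/examples/accel_perf -c /dev/fd/62 \
      -t 1 -w decompress -l "$SPDK_DIR"/test/accel/bib -y -o 0 -T 2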
00:07:50.620 [2024-05-15 00:44:37.319627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940284 ] 00:07:50.620 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.620 [2024-05-15 00:44:37.378128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.620 [2024-05-15 00:44:37.497697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.620 00:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.994 00:07:51.994 real 0m1.451s 00:07:51.994 user 0m1.322s 00:07:51.994 sys 0m0.132s 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.994 00:44:38 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:51.994 ************************************ 00:07:51.994 END TEST accel_decomp_full_mthread 00:07:51.994 
************************************ 00:07:51.994 00:44:38 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:51.994 00:44:38 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:51.994 00:44:38 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:51.994 00:44:38 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:51.994 00:44:38 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.994 00:44:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:51.994 00:44:38 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.995 00:44:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.995 00:44:38 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.995 00:44:38 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.995 00:44:38 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.995 00:44:38 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:51.995 00:44:38 accel -- accel/accel.sh@41 -- # jq -r . 00:07:51.995 ************************************ 00:07:51.995 START TEST accel_dif_functional_tests 00:07:51.995 ************************************ 00:07:51.995 00:44:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:51.995 [2024-05-15 00:44:38.858559] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:51.995 [2024-05-15 00:44:38.858655] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940495 ] 00:07:51.995 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.995 [2024-05-15 00:44:38.919482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.995 [2024-05-15 00:44:39.040680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.995 [2024-05-15 00:44:39.040741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.995 [2024-05-15 00:44:39.040745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.253 00:07:52.253 00:07:52.253 CUnit - A unit testing framework for C - Version 2.1-3 00:07:52.253 http://cunit.sourceforge.net/ 00:07:52.253 00:07:52.253 00:07:52.253 Suite: accel_dif 00:07:52.253 Test: verify: DIF generated, GUARD check ...passed 00:07:52.253 Test: verify: DIF generated, APPTAG check ...passed 00:07:52.253 Test: verify: DIF generated, REFTAG check ...passed 00:07:52.253 Test: verify: DIF not generated, GUARD check ...[2024-05-15 00:44:39.124563] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:52.253 [2024-05-15 00:44:39.124630] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:52.253 passed 00:07:52.253 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 00:44:39.124672] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:52.253 [2024-05-15 00:44:39.124719] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:52.253 passed 00:07:52.253 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 00:44:39.124756] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:52.253 [2024-05-15 
00:44:39.124797] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:52.253 passed 00:07:52.253 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:52.253 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 00:44:39.124870] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:52.253 passed 00:07:52.253 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:52.253 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:52.253 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:52.253 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 00:44:39.125062] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:52.253 passed 00:07:52.253 Test: generate copy: DIF generated, GUARD check ...passed 00:07:52.253 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:52.253 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:52.253 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:52.253 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:52.253 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:52.253 Test: generate copy: iovecs-len validate ...[2024-05-15 00:44:39.125365] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:52.253 passed 00:07:52.253 Test: generate copy: buffer alignment validate ...passed 00:07:52.253 00:07:52.253 Run Summary: Type Total Ran Passed Failed Inactive 00:07:52.253 suites 1 1 n/a 0 0 00:07:52.253 tests 20 20 20 0 0 00:07:52.253 asserts 204 204 204 0 n/a 00:07:52.253 00:07:52.253 Elapsed time = 0.003 seconds 00:07:52.512 00:07:52.512 real 0m0.510s 00:07:52.512 user 0m0.702s 00:07:52.512 sys 0m0.158s 00:07:52.512 00:44:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.512 00:44:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:52.512 ************************************ 00:07:52.512 END TEST accel_dif_functional_tests 00:07:52.512 ************************************ 00:07:52.512 00:07:52.512 real 0m32.260s 00:07:52.512 user 0m35.568s 00:07:52.512 sys 0m4.294s 00:07:52.512 00:44:39 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.512 00:44:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.512 ************************************ 00:07:52.512 END TEST accel 00:07:52.512 ************************************ 00:07:52.512 00:44:39 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:52.512 00:44:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:52.512 00:44:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.512 00:44:39 -- common/autotest_common.sh@10 -- # set +x 00:07:52.512 ************************************ 00:07:52.512 START TEST accel_rpc 00:07:52.512 ************************************ 00:07:52.512 00:44:39 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:52.512 * Looking for test storage... 
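[annotation] The accel_rpc suite that starts here exercises a paused spdk_tgt entirely over its RPC socket; a minimal sketch of the sequence the trace records, using only the RPC names visible in the log ($SPDK_DIR again shorthand for the workspace path):

  # Start the target in --wait-for-rpc mode, reassign the copy opcode,
  # finish framework init, then read the assignment back.
  "$SPDK_DIR"/build/bin/spdk_tgt --wait-for-rpc &
  tgt_pid=$!
  # (the real harness blocks on waitforlisten /var/tmp/spdk.sock here)
  "$SPDK_DIR"/scripts/rpc.py accel_assign_opc -o copy -m software
  "$SPDK_DIR"/scripts/rpc.py framework_start_init
  "$SPDK_DIR"/scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # expect: software
  kill "$tgt_pid"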
00:07:52.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:52.512 00:44:39 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:52.512 00:44:39 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3940567 00:07:52.512 00:44:39 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:52.512 00:44:39 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3940567 00:07:52.512 00:44:39 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3940567 ']' 00:07:52.512 00:44:39 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.512 00:44:39 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:52.512 00:44:39 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.512 00:44:39 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:52.512 00:44:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.512 [2024-05-15 00:44:39.509997] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:07:52.512 [2024-05-15 00:44:39.510094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940567 ] 00:07:52.512 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.771 [2024-05-15 00:44:39.569507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.771 [2024-05-15 00:44:39.686298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.771 00:44:39 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:52.771 00:44:39 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:52.771 00:44:39 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:52.771 00:44:39 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:52.771 00:44:39 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:52.771 00:44:39 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:52.771 00:44:39 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:52.771 00:44:39 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:52.771 00:44:39 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.771 00:44:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.771 ************************************ 00:07:52.771 START TEST accel_assign_opcode 00:07:52.771 ************************************ 00:07:52.771 00:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:52.771 00:44:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:52.771 00:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.771 00:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:52.771 [2024-05-15 00:44:39.782979] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:52.771 00:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:07:52.771 00:44:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:52.771 00:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.771 00:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:52.771 [2024-05-15 00:44:39.791010] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:52.771 00:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.771 00:44:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:52.771 00:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.771 00:44:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:53.029 00:44:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.029 00:44:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:53.029 00:44:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:53.029 00:44:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.029 00:44:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:53.029 00:44:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:53.029 00:44:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.029 software 00:07:53.029 00:07:53.029 real 0m0.267s 00:07:53.029 user 0m0.038s 00:07:53.029 sys 0m0.011s 00:07:53.029 00:44:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.029 00:44:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:53.029 ************************************ 00:07:53.029 END TEST accel_assign_opcode 00:07:53.029 ************************************ 00:07:53.029 00:44:40 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3940567 00:07:53.029 00:44:40 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3940567 ']' 00:07:53.029 00:44:40 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3940567 00:07:53.029 00:44:40 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:53.029 00:44:40 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:53.029 00:44:40 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3940567 00:07:53.287 00:44:40 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:53.287 00:44:40 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:53.287 00:44:40 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3940567' 00:07:53.287 killing process with pid 3940567 00:07:53.287 00:44:40 accel_rpc -- common/autotest_common.sh@965 -- # kill 3940567 00:07:53.287 00:44:40 accel_rpc -- common/autotest_common.sh@970 -- # wait 3940567 00:07:53.545 00:07:53.545 real 0m1.015s 00:07:53.545 user 0m1.021s 00:07:53.545 sys 0m0.394s 00:07:53.545 00:44:40 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.545 00:44:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.545 ************************************ 00:07:53.545 END TEST accel_rpc 00:07:53.545 ************************************ 00:07:53.545 00:44:40 -- spdk/autotest.sh@181 -- # 
run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:53.545 00:44:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:53.545 00:44:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.545 00:44:40 -- common/autotest_common.sh@10 -- # set +x 00:07:53.545 ************************************ 00:07:53.545 START TEST app_cmdline 00:07:53.545 ************************************ 00:07:53.545 00:44:40 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:53.545 * Looking for test storage... 00:07:53.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:53.545 00:44:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:53.545 00:44:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3940739 00:07:53.545 00:44:40 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:53.545 00:44:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3940739 00:07:53.545 00:44:40 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3940739 ']' 00:07:53.545 00:44:40 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.545 00:44:40 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:53.545 00:44:40 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.545 00:44:40 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:53.545 00:44:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:53.545 [2024-05-15 00:44:40.585750] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
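[annotation] app_cmdline starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods and verifies both that the whitelisted methods answer and that everything else is rejected; a hedged sketch of that check, built only from commands that appear in the trace:

  # Whitelisted target: exactly two RPC methods should be reachable.
  "$SPDK_DIR"/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  "$SPDK_DIR"/scripts/rpc.py spdk_get_version               # returns the version object
  "$SPDK_DIR"/scripts/rpc.py rpc_get_methods | jq -r ".[]" | sort
  "$SPDK_DIR"/scripts/rpc.py env_dpdk_get_mem_stats         # expected to fail: Method not found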
00:07:53.545 [2024-05-15 00:44:40.585853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940739 ] 00:07:53.803 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.803 [2024-05-15 00:44:40.645313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.804 [2024-05-15 00:44:40.761904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.062 00:44:40 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:54.062 00:44:40 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:54.062 00:44:40 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:54.320 { 00:07:54.320 "version": "SPDK v24.05-pre git sha1 c06b0c79b", 00:07:54.320 "fields": { 00:07:54.320 "major": 24, 00:07:54.321 "minor": 5, 00:07:54.321 "patch": 0, 00:07:54.321 "suffix": "-pre", 00:07:54.321 "commit": "c06b0c79b" 00:07:54.321 } 00:07:54.321 } 00:07:54.321 00:44:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:54.321 00:44:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:54.321 00:44:41 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:54.321 00:44:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:54.321 00:44:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:54.321 00:44:41 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.321 00:44:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:54.321 00:44:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:54.321 00:44:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:54.321 00:44:41 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.321 00:44:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:54.321 00:44:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:54.321 00:44:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:54.321 00:44:41 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:54.321 00:44:41 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:54.321 00:44:41 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.321 00:44:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:54.321 00:44:41 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.321 00:44:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:54.321 00:44:41 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.321 00:44:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:54.321 00:44:41 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.321 00:44:41 
app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:54.321 00:44:41 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:54.579 request: 00:07:54.579 { 00:07:54.579 "method": "env_dpdk_get_mem_stats", 00:07:54.579 "req_id": 1 00:07:54.579 } 00:07:54.579 Got JSON-RPC error response 00:07:54.579 response: 00:07:54.579 { 00:07:54.579 "code": -32601, 00:07:54.579 "message": "Method not found" 00:07:54.579 } 00:07:54.579 00:44:41 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:54.579 00:44:41 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:54.579 00:44:41 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:54.579 00:44:41 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:54.579 00:44:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3940739 00:07:54.579 00:44:41 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3940739 ']' 00:07:54.579 00:44:41 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3940739 00:07:54.579 00:44:41 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:54.579 00:44:41 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:54.579 00:44:41 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3940739 00:07:54.838 00:44:41 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:54.838 00:44:41 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:54.838 00:44:41 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3940739' 00:07:54.838 killing process with pid 3940739 00:07:54.838 00:44:41 app_cmdline -- common/autotest_common.sh@965 -- # kill 3940739 00:07:54.838 00:44:41 app_cmdline -- common/autotest_common.sh@970 -- # wait 3940739 00:07:55.097 00:07:55.097 real 0m1.492s 00:07:55.097 user 0m1.953s 00:07:55.097 sys 0m0.440s 00:07:55.097 00:44:41 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.097 00:44:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:55.097 ************************************ 00:07:55.097 END TEST app_cmdline 00:07:55.097 ************************************ 00:07:55.097 00:44:41 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:55.097 00:44:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:55.097 00:44:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.097 00:44:41 -- common/autotest_common.sh@10 -- # set +x 00:07:55.097 ************************************ 00:07:55.097 START TEST version 00:07:55.097 ************************************ 00:07:55.097 00:44:42 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:55.097 * Looking for test storage... 
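The "Method not found" response above is the app_cmdline test passing, not failing: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any RPC outside that allowlist must be rejected with JSON-RPC error -32601. A minimal re-run of the same check, with the absolute Jenkins workspace paths shortened to a local SPDK checkout and a plain sleep standing in for the harness's waitforlisten polling:

    # Start the target so that only two RPCs are callable.
    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    tgt=$!
    sleep 2   # the harness instead polls /var/tmp/spdk.sock (waitforlisten)

    ./scripts/rpc.py spdk_get_version        # allowed: returns the version JSON above
    ./scripts/rpc.py rpc_get_methods         # allowed: lists exactly the two methods
    ./scripts/rpc.py env_dpdk_get_mem_stats  # blocked: error -32601, "Method not found"

    kill $tgt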
00:07:55.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:55.097 00:44:42 version -- app/version.sh@17 -- # get_header_version major 00:07:55.097 00:44:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:55.097 00:44:42 version -- app/version.sh@14 -- # cut -f2 00:07:55.097 00:44:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.097 00:44:42 version -- app/version.sh@17 -- # major=24 00:07:55.097 00:44:42 version -- app/version.sh@18 -- # get_header_version minor 00:07:55.097 00:44:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:55.097 00:44:42 version -- app/version.sh@14 -- # cut -f2 00:07:55.097 00:44:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.097 00:44:42 version -- app/version.sh@18 -- # minor=5 00:07:55.097 00:44:42 version -- app/version.sh@19 -- # get_header_version patch 00:07:55.097 00:44:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:55.097 00:44:42 version -- app/version.sh@14 -- # cut -f2 00:07:55.097 00:44:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.097 00:44:42 version -- app/version.sh@19 -- # patch=0 00:07:55.097 00:44:42 version -- app/version.sh@20 -- # get_header_version suffix 00:07:55.097 00:44:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:55.097 00:44:42 version -- app/version.sh@14 -- # cut -f2 00:07:55.097 00:44:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.097 00:44:42 version -- app/version.sh@20 -- # suffix=-pre 00:07:55.097 00:44:42 version -- app/version.sh@22 -- # version=24.5 00:07:55.097 00:44:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:55.097 00:44:42 version -- app/version.sh@28 -- # version=24.5rc0 00:07:55.097 00:44:42 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:55.097 00:44:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:55.097 00:44:42 version -- app/version.sh@30 -- # py_version=24.5rc0 00:07:55.097 00:44:42 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:55.097 00:07:55.097 real 0m0.113s 00:07:55.097 user 0m0.070s 00:07:55.097 sys 0m0.066s 00:07:55.097 00:44:42 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.097 00:44:42 version -- common/autotest_common.sh@10 -- # set +x 00:07:55.097 ************************************ 00:07:55.097 END TEST version 00:07:55.097 ************************************ 00:07:55.356 00:44:42 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:55.356 00:44:42 -- spdk/autotest.sh@194 -- # uname -s 00:07:55.356 00:44:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:55.356 00:44:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:55.356 00:44:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:55.356 00:44:42 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
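The version test traced above derives each component by grepping a #define out of include/spdk/version.h, taking the second tab-separated field, and stripping the quotes. A standalone sketch of the traced get_header_version pattern (rootdir stands in for the absolute workspace path; the -pre to rc0 mapping is reconstructed from the traced values at app/version.sh@20-28):

    rootdir=${rootdir:-.}   # an SPDK checkout

    get_header_version() {
        # ${1^^} uppercases the argument (bash >= 4), so "major" -> MAJOR
        grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" \
            "$rootdir/include/spdk/version.h" | cut -f2 | tr -d '"'
    }

    major=$(get_header_version major)     # 24 in this run
    minor=$(get_header_version minor)     # 5
    patch=$(get_header_version patch)     # 0
    suffix=$(get_header_version suffix)   # -pre

    version=$major.$minor
    ((patch != 0)) && version=$version.$patch
    [[ $suffix == -pre ]] && version=${version}rc0
    echo "$version"   # 24.5rc0, matching py_version in the trace

The final comparison against python3 -c 'import spdk; print(spdk.__version__)' guards against the C header and the Python package drifting apart.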
00:07:55.356 00:44:42 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:55.356 00:44:42 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:55.356 00:44:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:55.356 00:44:42 -- common/autotest_common.sh@10 -- # set +x 00:07:55.356 00:44:42 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:55.356 00:44:42 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:55.356 00:44:42 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:07:55.356 00:44:42 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:07:55.356 00:44:42 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:07:55.356 00:44:42 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:07:55.356 00:44:42 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:55.356 00:44:42 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:55.356 00:44:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.356 00:44:42 -- common/autotest_common.sh@10 -- # set +x 00:07:55.356 ************************************ 00:07:55.356 START TEST nvmf_tcp 00:07:55.356 ************************************ 00:07:55.356 00:44:42 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:55.356 * Looking for test storage... 00:07:55.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.356 00:44:42 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.356 00:44:42 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.356 00:44:42 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.356 00:44:42 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.356 00:44:42 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.356 00:44:42 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.356 00:44:42 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.356 00:44:42 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:55.356 00:44:42 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.357 00:44:42 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:55.357 00:44:42 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:55.357 00:44:42 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:55.357 00:44:42 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.357 00:44:42 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.357 00:44:42 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.357 00:44:42 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:55.357 00:44:42 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:55.357 00:44:42 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:55.357 00:44:42 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:55.357 00:44:42 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:55.357 00:44:42 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:55.357 00:44:42 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:55.357 00:44:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:55.357 00:44:42 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:55.357 00:44:42 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:55.357 00:44:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:55.357 00:44:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.357 
00:44:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:55.357 ************************************ 00:07:55.357 START TEST nvmf_example 00:07:55.357 ************************************ 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:55.357 * Looking for test storage... 00:07:55.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:55.357 00:44:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.264 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:57.265 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:57.265 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:57.265 Found net devices under 
0000:08:00.0: cvl_0_0 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:57.265 Found net devices under 0000:08:00.1: cvl_0_1 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:57.265 00:44:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:57.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:57.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:07:57.265 00:07:57.265 --- 10.0.0.2 ping statistics --- 00:07:57.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.265 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:57.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:57.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:07:57.265 00:07:57.265 --- 10.0.0.1 ping statistics --- 00:07:57.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.265 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3942284 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3942284 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 3942284 ']' 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
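The interleaved nvmf/common.sh trace above is the TCP test-bed bring-up: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace to act as the target, the peer port (cvl_0_1) stays in the root namespace as the initiator, and connectivity is proven with one ping in each direction. Condensed into plain commands (interface names and addresses are the ones this run chose; all of it needs root):

    ip netns add cvl_0_0_ns_spdk                  # target side gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one NIC port into it

    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Let NVMe/TCP traffic through on the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator

Every target-side process is then launched under ip netns exec cvl_0_0_ns_spdk, which is why the nvmf example binary below carries that prefix.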
00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:57.265 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:57.265 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:57.523 00:44:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:57.523 EAL: No free 2048 kB hugepages reported on node 1 
00:08:09.743 Initializing NVMe Controllers 00:08:09.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:09.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:09.743 Initialization complete. Launching workers. 00:08:09.743 ======================================================== 00:08:09.743 Latency(us) 00:08:09.743 Device Information : IOPS MiB/s Average min max 00:08:09.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13516.72 52.80 4734.35 743.11 15537.02 00:08:09.743 ======================================================== 00:08:09.743 Total : 13516.72 52.80 4734.35 743.11 15537.02 00:08:09.743 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:09.743 rmmod nvme_tcp 00:08:09.743 rmmod nvme_fabrics 00:08:09.743 rmmod nvme_keyring 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3942284 ']' 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3942284 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 3942284 ']' 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 3942284 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3942284 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3942284' 00:08:09.743 killing process with pid 3942284 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 3942284 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 3942284 00:08:09.743 nvmf threads initialize successfully 00:08:09.743 bdev subsystem init successfully 00:08:09.743 created a nvmf target service 00:08:09.743 create targets's poll groups done 00:08:09.743 all subsystems of target started 00:08:09.743 nvmf target is running 00:08:09.743 all subsystems of target stopped 00:08:09.743 destroy targets's poll groups done 00:08:09.743 destroyed the nvmf target service 00:08:09.743 bdev subsystem finish successfully 00:08:09.743 nvmf threads destroy successfully 00:08:09.743 00:44:54 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.743 00:44:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.015 00:44:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:10.015 00:44:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:10.015 00:44:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:10.015 00:44:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:10.015 00:08:10.015 real 0m14.753s 00:08:10.015 user 0m40.998s 00:08:10.015 sys 0m3.329s 00:08:10.015 00:44:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:10.015 00:44:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:10.015 ************************************ 00:08:10.015 END TEST nvmf_example 00:08:10.015 ************************************ 00:08:10.276 00:44:57 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:10.277 00:44:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:10.277 00:44:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:10.277 00:44:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:10.277 ************************************ 00:08:10.277 START TEST nvmf_filesystem 00:08:10.277 ************************************ 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:10.277 * Looking for test storage... 
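For reference, the target-side configuration and the load generator used by the nvmf_example test above, gathered from its rpc_cmd trace (paths shortened; the harness drives the same RPCs through rpc_cmd over /var/tmp/spdk.sock):

    rpc=./scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                 # 64 MiB malloc bdev -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 10 s of 4 KiB random I/O at queue depth 64, 30% reads, over NVMe/TCP;
    # this produced the ~13.5k IOPS latency table in the log above.
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'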
00:08:10.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:10.277 00:44:57 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:10.277 00:44:57 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:10.277 00:44:57 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:10.278 
00:44:57 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:10.278 #define SPDK_CONFIG_H 00:08:10.278 #define SPDK_CONFIG_APPS 1 00:08:10.278 #define SPDK_CONFIG_ARCH native 00:08:10.278 #undef SPDK_CONFIG_ASAN 00:08:10.278 #undef SPDK_CONFIG_AVAHI 00:08:10.278 #undef SPDK_CONFIG_CET 00:08:10.278 #define SPDK_CONFIG_COVERAGE 1 00:08:10.278 #define SPDK_CONFIG_CROSS_PREFIX 00:08:10.278 #undef SPDK_CONFIG_CRYPTO 00:08:10.278 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:10.278 #undef SPDK_CONFIG_CUSTOMOCF 00:08:10.278 #undef SPDK_CONFIG_DAOS 00:08:10.278 #define SPDK_CONFIG_DAOS_DIR 00:08:10.278 #define SPDK_CONFIG_DEBUG 1 00:08:10.278 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:10.278 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:10.278 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:10.278 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:10.278 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:10.278 #undef SPDK_CONFIG_DPDK_UADK 00:08:10.278 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:10.278 #define SPDK_CONFIG_EXAMPLES 1 00:08:10.278 #undef SPDK_CONFIG_FC 00:08:10.278 #define SPDK_CONFIG_FC_PATH 00:08:10.278 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:10.278 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:10.278 #undef SPDK_CONFIG_FUSE 00:08:10.278 #undef SPDK_CONFIG_FUZZER 00:08:10.278 #define SPDK_CONFIG_FUZZER_LIB 00:08:10.278 #undef SPDK_CONFIG_GOLANG 00:08:10.278 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:10.278 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:10.278 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:10.278 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:08:10.278 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:10.278 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:10.278 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:10.278 #define SPDK_CONFIG_IDXD 1 00:08:10.278 #undef SPDK_CONFIG_IDXD_KERNEL 00:08:10.278 #undef SPDK_CONFIG_IPSEC_MB 00:08:10.278 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:10.278 #define SPDK_CONFIG_ISAL 1 00:08:10.278 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:10.278 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:10.278 #define SPDK_CONFIG_LIBDIR 00:08:10.278 #undef SPDK_CONFIG_LTO 00:08:10.278 #define SPDK_CONFIG_MAX_LCORES 00:08:10.278 #define SPDK_CONFIG_NVME_CUSE 1 00:08:10.278 #undef SPDK_CONFIG_OCF 00:08:10.278 #define SPDK_CONFIG_OCF_PATH 00:08:10.278 #define SPDK_CONFIG_OPENSSL_PATH 00:08:10.278 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:10.278 #define SPDK_CONFIG_PGO_DIR 00:08:10.278 #undef 
SPDK_CONFIG_PGO_USE 00:08:10.278 #define SPDK_CONFIG_PREFIX /usr/local 00:08:10.278 #undef SPDK_CONFIG_RAID5F 00:08:10.278 #undef SPDK_CONFIG_RBD 00:08:10.278 #define SPDK_CONFIG_RDMA 1 00:08:10.278 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:10.278 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:10.278 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:10.278 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:10.278 #define SPDK_CONFIG_SHARED 1 00:08:10.278 #undef SPDK_CONFIG_SMA 00:08:10.278 #define SPDK_CONFIG_TESTS 1 00:08:10.278 #undef SPDK_CONFIG_TSAN 00:08:10.278 #define SPDK_CONFIG_UBLK 1 00:08:10.278 #define SPDK_CONFIG_UBSAN 1 00:08:10.278 #undef SPDK_CONFIG_UNIT_TESTS 00:08:10.278 #undef SPDK_CONFIG_URING 00:08:10.278 #define SPDK_CONFIG_URING_PATH 00:08:10.278 #undef SPDK_CONFIG_URING_ZNS 00:08:10.278 #undef SPDK_CONFIG_USDT 00:08:10.278 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:10.278 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:10.278 #define SPDK_CONFIG_VFIO_USER 1 00:08:10.278 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:10.278 #define SPDK_CONFIG_VHOST 1 00:08:10.278 #define SPDK_CONFIG_VIRTIO 1 00:08:10.278 #undef SPDK_CONFIG_VTUNE 00:08:10.278 #define SPDK_CONFIG_VTUNE_DIR 00:08:10.278 #define SPDK_CONFIG_WERROR 1 00:08:10.278 #define SPDK_CONFIG_WPDK_DIR 00:08:10.278 #undef SPDK_CONFIG_XNVME 00:08:10.278 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:08:10.278 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:08:10.279 00:44:57 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:08:10.279 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo 
leak:libfuse3.so 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 
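[Annotation] After the MAKE/HUGEMEM bookkeeping just below, the trace enters autotest_common.sh's set_test_storage: it asks for 2214592512 bytes (2 GiB plus slack), parses `df -T` into per-mount associative arrays, then checks candidate directories until one sits on a filesystem with enough free space. A compressed sketch of that selection logic, reconstructed from the trace (variable names match the trace; the real function handles more candidates and the tmpfs/ramfs special cases):

```bash
#!/usr/bin/env bash
# Compressed sketch of set_test_storage as traced below: pick the first
# candidate directory whose filesystem has requested_size bytes free.
requested_size=2214592512                    # 2 GiB of data plus slack, per the trace
storage_fallback=$(mktemp -udt spdk.XXXXXX)  # unique, uncreated path, e.g. /tmp/spdk.EGXXaw
mkdir -p "$storage_fallback"

declare -A avails
# df -T columns: source, fstype, 1K-blocks, used, available, use%, mount point.
while read -r source fs size use avail _ mount; do
    avails["$mount"]=$((avail * 1024))       # convert 1K blocks to bytes
done < <(df -T | grep -v Filesystem)

for target_dir in "$PWD" "$storage_fallback"; do
    # Resolve the mount point backing this directory, as the trace does.
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
    target_space=${avails[$mount]:-0}
    if (( target_space >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$target_dir"
        break
    fi
done
```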
00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j32 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 3943557 ]] 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 3943557 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.EGXXaw 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.EGXXaw/tests/target /tmp/spdk.EGXXaw 00:08:10.280 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # 
avails["$mount"]=67108864 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=970956800 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4313473024 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=39594483712 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=53546180608 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=13951696896 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=26768379904 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=26773090304 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=10700738560 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=10709237760 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8499200 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=26771722240 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=26773090304 00:08:10.281 00:44:57 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1368064 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=5354610688 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5354614784 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:08:10.281 * Looking for test storage... 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=39594483712 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=16166289408 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:08:10.281 00:44:57 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.281 00:44:57 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.281 
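[Annotation] scripts/common.sh is about to source /etc/opt/spdk-pkgdep/paths/export.sh for the second time in this test (autotest_common.sh@54 already did), and each pass prepends the same golangci/protoc/go bin directories again, which is why the PATH echoed below carries the same entries many times over. The harness tolerates the duplicates; purely as an illustration, a duplicate-suppressing prepend would look like the following (`prepend_path` is a hypothetical helper, not part of the SPDK scripts):

```bash
# Hypothetical helper, NOT in the SPDK scripts: prepend a directory to PATH
# only if it is not already a component, keeping repeated sourcing idempotent.
prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;                 # already present: leave PATH alone
        *) PATH="$1:$PATH" ;;
    esac
}

prepend_path /opt/golangci/1.54.2/bin
prepend_path /opt/protoc/21.7/bin
prepend_path /opt/go/1.21.1/bin
export PATH
```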
00:44:57 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.282 00:44:57 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:10.282 00:44:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.186 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:12.186 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:12.187 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:12.187 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.187 00:44:58 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:12.187 Found net devices under 0000:08:00.0: cvl_0_0 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:12.187 Found net devices under 0000:08:00.1: cvl_0_1 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:12.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:08:12.187 00:08:12.187 --- 10.0.0.2 ping statistics --- 00:08:12.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.187 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:12.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:08:12.187 00:08:12.187 --- 10.0.0.1 ping statistics --- 00:08:12.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.187 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:12.187 00:44:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:12.187 00:44:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:12.187 00:44:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:12.187 00:44:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:12.187 00:44:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.187 ************************************ 00:08:12.187 START TEST nvmf_filesystem_no_in_capsule 00:08:12.187 ************************************ 00:08:12.187 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:08:12.187 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:12.187 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:12.187 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:12.187 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:08:12.187 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.187 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3944805 00:08:12.187 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:12.188 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3944805 00:08:12.188 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3944805 ']' 00:08:12.188 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.188 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:12.188 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.188 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:12.188 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.188 [2024-05-15 00:44:59.089449] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:08:12.188 [2024-05-15 00:44:59.089533] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.188 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.188 [2024-05-15 00:44:59.154485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.446 [2024-05-15 00:44:59.274473] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.446 [2024-05-15 00:44:59.274534] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.446 [2024-05-15 00:44:59.274559] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.446 [2024-05-15 00:44:59.274578] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.446 [2024-05-15 00:44:59.274596] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
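[Editor's note] The nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) moves the target-side port into a private network namespace so the target (10.0.0.2, cvl_0_0) and the initiator (10.0.0.1, cvl_0_1) can exchange NVMe/TCP traffic on a single host; nvmf_tgt is then launched inside that namespace. A condensed standalone sketch of the same wiring, with the interface and namespace names taken from this run (adjust for other NICs):

# flush any stale addressing, then isolate the target port in its own netns
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator side stays in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let NVMe/TCP (port 4420) in through the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check reachability in both directions, as the trace does
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1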
00:08:12.446 [2024-05-15 00:44:59.274680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.446 [2024-05-15 00:44:59.274736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.446 [2024-05-15 00:44:59.274801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.446 [2024-05-15 00:44:59.274792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.446 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:12.446 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:12.446 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:12.446 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:12.446 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.446 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.446 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:12.446 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:12.446 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.446 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.446 [2024-05-15 00:44:59.420585] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.446 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.446 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:12.446 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.446 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.704 Malloc1 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.704 [2024-05-15 00:44:59.585720] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:12.704 [2024-05-15 00:44:59.586048] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.704 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:12.704 { 00:08:12.704 "name": "Malloc1", 00:08:12.704 "aliases": [ 00:08:12.704 "b5ea5202-1731-4c7c-bddd-acb1ac4c484a" 00:08:12.704 ], 00:08:12.704 "product_name": "Malloc disk", 00:08:12.704 "block_size": 512, 00:08:12.704 "num_blocks": 1048576, 00:08:12.704 "uuid": "b5ea5202-1731-4c7c-bddd-acb1ac4c484a", 00:08:12.704 "assigned_rate_limits": { 00:08:12.704 "rw_ios_per_sec": 0, 00:08:12.704 "rw_mbytes_per_sec": 0, 00:08:12.704 "r_mbytes_per_sec": 0, 00:08:12.704 "w_mbytes_per_sec": 0 00:08:12.704 }, 00:08:12.704 "claimed": true, 00:08:12.704 "claim_type": "exclusive_write", 00:08:12.704 "zoned": false, 00:08:12.704 "supported_io_types": { 00:08:12.704 "read": true, 00:08:12.704 "write": true, 00:08:12.704 "unmap": true, 00:08:12.704 "write_zeroes": true, 00:08:12.704 "flush": true, 00:08:12.704 "reset": true, 00:08:12.704 "compare": false, 00:08:12.704 "compare_and_write": false, 00:08:12.704 "abort": true, 00:08:12.704 "nvme_admin": false, 00:08:12.704 "nvme_io": false 00:08:12.704 }, 00:08:12.704 "memory_domains": [ 00:08:12.704 { 00:08:12.704 "dma_device_id": "system", 00:08:12.704 "dma_device_type": 1 
00:08:12.704 }, 00:08:12.704 { 00:08:12.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.704 "dma_device_type": 2 00:08:12.704 } 00:08:12.705 ], 00:08:12.705 "driver_specific": {} 00:08:12.705 } 00:08:12.705 ]' 00:08:12.705 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:12.705 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:12.705 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:12.705 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:12.705 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:12.705 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:12.705 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:12.705 00:44:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:13.269 00:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:13.269 00:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:13.269 00:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:13.269 00:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:13.269 00:45:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:15.165 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:15.165 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:15.165 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:15.165 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:15.165 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:15.166 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:15.166 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:15.166 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:15.166 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:15.166 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:15.166 00:45:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:15.166 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:15.166 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:15.166 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:15.166 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:15.166 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:15.166 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:15.423 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:15.988 00:45:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:16.921 00:45:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:16.921 00:45:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:16.921 00:45:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:16.921 00:45:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:16.921 00:45:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.921 ************************************ 00:08:16.921 START TEST filesystem_ext4 00:08:16.921 ************************************ 00:08:16.921 00:45:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:16.921 00:45:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:16.921 00:45:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:16.921 00:45:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:16.921 00:45:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:16.921 00:45:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:16.921 00:45:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:16.921 00:45:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:16.921 00:45:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:16.921 00:45:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:16.921 00:45:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:16.921 mke2fs 1.46.5 (30-Dec-2021) 00:08:16.921 Discarding device blocks: 0/522240 done 00:08:16.921 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:16.921 Filesystem UUID: d0986f2f-b493-48ce-924e-9dcec83ed8e1 00:08:16.921 Superblock backups stored on blocks: 00:08:16.921 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:16.921 00:08:16.921 Allocating group tables: 0/64 done 00:08:16.921 Writing inode tables: 0/64 done 00:08:16.921 Creating journal (8192 blocks): done 00:08:17.743 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:08:17.743 00:08:17.743 00:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:17.743 00:45:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:18.308 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:18.308 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3944805 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:18.590 00:08:18.590 real 0m1.570s 00:08:18.590 user 0m0.017s 00:08:18.590 sys 0m0.039s 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:18.590 ************************************ 00:08:18.590 END TEST filesystem_ext4 00:08:18.590 ************************************ 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:18.590 
00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.590 ************************************ 00:08:18.590 START TEST filesystem_btrfs 00:08:18.590 ************************************ 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:18.590 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:18.846 btrfs-progs v6.6.2 00:08:18.846 See https://btrfs.readthedocs.io for more information. 00:08:18.846 00:08:18.846 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:18.846 NOTE: several default settings have changed in version 5.15, please make sure 00:08:18.846 this does not affect your deployments: 00:08:18.846 - DUP for metadata (-m dup) 00:08:18.846 - enabled no-holes (-O no-holes) 00:08:18.846 - enabled free-space-tree (-R free-space-tree) 00:08:18.846 00:08:18.846 Label: (null) 00:08:18.846 UUID: 89819902-a924-4c0f-b50d-23c9b15d2ec4 00:08:18.846 Node size: 16384 00:08:18.846 Sector size: 4096 00:08:18.846 Filesystem size: 510.00MiB 00:08:18.846 Block group profiles: 00:08:18.846 Data: single 8.00MiB 00:08:18.846 Metadata: DUP 32.00MiB 00:08:18.846 System: DUP 8.00MiB 00:08:18.846 SSD detected: yes 00:08:18.846 Zoned device: no 00:08:18.846 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:18.846 Runtime features: free-space-tree 00:08:18.846 Checksum: crc32c 00:08:18.846 Number of devices: 1 00:08:18.846 Devices: 00:08:18.846 ID SIZE PATH 00:08:18.846 1 510.00MiB /dev/nvme0n1p1 00:08:18.846 00:08:18.846 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:18.846 00:45:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.409 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.409 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:19.409 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.409 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:19.409 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:19.409 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.409 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3944805 00:08:19.409 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.410 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.410 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.410 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.410 00:08:19.410 real 0m0.990s 00:08:19.410 user 0m0.020s 00:08:19.410 sys 0m0.043s 00:08:19.410 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:19.410 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:19.410 ************************************ 00:08:19.410 END TEST filesystem_btrfs 00:08:19.410 ************************************ 00:08:19.667 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:19.667 00:45:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:19.667 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:19.667 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.667 ************************************ 00:08:19.667 START TEST filesystem_xfs 00:08:19.667 ************************************ 00:08:19.667 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:19.667 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:19.667 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:19.667 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:19.667 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:19.667 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:19.667 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:19.667 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:08:19.667 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:19.667 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:19.667 00:45:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:19.667 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:19.667 = sectsz=512 attr=2, projid32bit=1 00:08:19.667 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:19.667 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:19.667 data = bsize=4096 blocks=130560, imaxpct=25 00:08:19.668 = sunit=0 swidth=0 blks 00:08:19.668 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:19.668 log =internal log bsize=4096 blocks=16384, version=2 00:08:19.668 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:19.668 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:20.598 Discarding blocks...Done. 
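[Editor's note] Each of the three mkfs runs above goes through the make_filesystem helper whose xtrace appears at autotest_common.sh@922-941. A reconstruction from those trace lines, as a sketch only; the retry bookkeeping around $i is elided here because this run never exercises it:

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F    # mke2fs needs -F to write over an existing signature
    else
        force=-f    # mkfs.btrfs and mkfs.xfs both take -f
    fi
    mkfs.$fstype $force "$dev_name" && return 0
}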
00:08:20.598 00:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:20.598 00:45:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3944805 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:22.497 00:08:22.497 real 0m2.607s 00:08:22.497 user 0m0.013s 00:08:22.497 sys 0m0.034s 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:22.497 ************************************ 00:08:22.497 END TEST filesystem_xfs 00:08:22.497 ************************************ 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:22.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:22.497 
00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3944805 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3944805 ']' 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3944805 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3944805 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3944805' 00:08:22.497 killing process with pid 3944805 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 3944805 00:08:22.497 [2024-05-15 00:45:09.296670] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:22.497 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 3944805 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:22.755 00:08:22.755 real 0m10.602s 00:08:22.755 user 0m40.223s 00:08:22.755 sys 0m1.648s 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.755 ************************************ 00:08:22.755 END TEST nvmf_filesystem_no_in_capsule 00:08:22.755 ************************************ 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.755 ************************************ 00:08:22.755 START TEST nvmf_filesystem_in_capsule 00:08:22.755 ************************************ 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3946571 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3946571 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3946571 ']' 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:22.755 00:45:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.755 [2024-05-15 00:45:09.745128] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:08:22.755 [2024-05-15 00:45:09.745227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.755 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.013 [2024-05-15 00:45:09.812907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.013 [2024-05-15 00:45:09.933238] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.013 [2024-05-15 00:45:09.933299] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
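[Editor's note] This second half repeats the whole flow with in-capsule data enabled. As the trace just below shows, the only functional difference is the -c argument handed to nvmf_create_transport (rpc_cmd is SPDK's wrapper that drives scripts/rpc.py against /var/tmp/spdk.sock, the socket named earlier in the trace):

# no_in_capsule half (earlier in the log): in-capsule data disabled
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
# in_capsule half (below): allow up to 4096 bytes of in-capsule data
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096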
00:08:23.013 [2024-05-15 00:45:09.933315] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.013 [2024-05-15 00:45:09.933328] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.013 [2024-05-15 00:45:09.933340] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.013 [2024-05-15 00:45:09.933405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.013 [2024-05-15 00:45:09.933457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.013 [2024-05-15 00:45:09.933508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.013 [2024-05-15 00:45:09.933511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.013 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:23.013 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:23.013 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:23.013 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:23.013 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.270 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.270 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:23.270 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:23.270 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.270 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.270 [2024-05-15 00:45:10.087587] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.270 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.270 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:23.270 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.270 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.270 Malloc1 00:08:23.270 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.270 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:23.270 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.270 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.270 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.270 00:45:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.271 [2024-05-15 00:45:10.241398] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:23.271 [2024-05-15 00:45:10.241674] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:23.271 { 00:08:23.271 "name": "Malloc1", 00:08:23.271 "aliases": [ 00:08:23.271 "684d95ce-e7a8-4828-9e7f-b0ddaf96836f" 00:08:23.271 ], 00:08:23.271 "product_name": "Malloc disk", 00:08:23.271 "block_size": 512, 00:08:23.271 "num_blocks": 1048576, 00:08:23.271 "uuid": "684d95ce-e7a8-4828-9e7f-b0ddaf96836f", 00:08:23.271 "assigned_rate_limits": { 00:08:23.271 "rw_ios_per_sec": 0, 00:08:23.271 "rw_mbytes_per_sec": 0, 00:08:23.271 "r_mbytes_per_sec": 0, 00:08:23.271 "w_mbytes_per_sec": 0 00:08:23.271 }, 00:08:23.271 "claimed": true, 00:08:23.271 "claim_type": "exclusive_write", 00:08:23.271 "zoned": false, 00:08:23.271 "supported_io_types": { 00:08:23.271 "read": true, 00:08:23.271 "write": true, 00:08:23.271 "unmap": true, 00:08:23.271 "write_zeroes": true, 00:08:23.271 "flush": true, 00:08:23.271 "reset": true, 
00:08:23.271 "compare": false, 00:08:23.271 "compare_and_write": false, 00:08:23.271 "abort": true, 00:08:23.271 "nvme_admin": false, 00:08:23.271 "nvme_io": false 00:08:23.271 }, 00:08:23.271 "memory_domains": [ 00:08:23.271 { 00:08:23.271 "dma_device_id": "system", 00:08:23.271 "dma_device_type": 1 00:08:23.271 }, 00:08:23.271 { 00:08:23.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.271 "dma_device_type": 2 00:08:23.271 } 00:08:23.271 ], 00:08:23.271 "driver_specific": {} 00:08:23.271 } 00:08:23.271 ]' 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:23.271 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:23.530 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:23.530 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:23.530 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:23.530 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:23.530 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:23.788 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:23.788 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:23.788 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:23.788 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:23.788 00:45:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:26.312 00:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:26.312 00:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:26.312 00:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:26.312 00:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:26.312 00:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:26.312 00:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:26.312 00:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:26.312 00:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:26.312 00:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:26.312 00:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:26.312 00:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:26.312 00:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:26.312 00:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:26.312 00:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:26.312 00:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:26.312 00:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:26.312 00:45:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:26.312 00:45:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:26.576 00:45:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:27.948 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:27.948 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:27.948 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:27.948 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.948 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.948 ************************************ 00:08:27.948 START TEST filesystem_in_capsule_ext4 00:08:27.948 ************************************ 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:27.949 mke2fs 1.46.5 (30-Dec-2021) 00:08:27.949 Discarding device blocks: 0/522240 done 00:08:27.949 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:27.949 Filesystem UUID: 366b9f52-ffa7-47d1-ac77-a76a6807b989 00:08:27.949 Superblock backups stored on blocks: 00:08:27.949 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:27.949 00:08:27.949 Allocating group tables: 0/64 done 00:08:27.949 Writing inode tables: 0/64 done 00:08:27.949 Creating journal (8192 blocks): done 00:08:27.949 Writing superblocks and filesystem accounting information: 0/64 done 00:08:27.949 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3946571 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:27.949 00:08:27.949 real 0m0.375s 00:08:27.949 user 0m0.017s 00:08:27.949 sys 0m0.031s 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.949 00:45:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:27.949 ************************************ 00:08:27.949 END TEST filesystem_in_capsule_ext4 00:08:27.949 ************************************ 00:08:28.207 00:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:28.207 00:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:28.207 00:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:28.207 00:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:28.207 ************************************ 00:08:28.207 START TEST filesystem_in_capsule_btrfs 00:08:28.207 ************************************ 00:08:28.207 00:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:28.207 00:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:28.207 00:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:28.207 00:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:28.207 00:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:28.207 00:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:28.207 00:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:28.207 00:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:28.207 00:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:28.207 00:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:28.207 00:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:28.464 btrfs-progs v6.6.2 00:08:28.464 See https://btrfs.readthedocs.io for more information. 00:08:28.464 00:08:28.464 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:28.464 NOTE: several default settings have changed in version 5.15, please make sure 00:08:28.464 this does not affect your deployments: 00:08:28.464 - DUP for metadata (-m dup) 00:08:28.464 - enabled no-holes (-O no-holes) 00:08:28.465 - enabled free-space-tree (-R free-space-tree) 00:08:28.465 00:08:28.465 Label: (null) 00:08:28.465 UUID: 42eb88b5-cedc-444f-9197-ff9970bcf1aa 00:08:28.465 Node size: 16384 00:08:28.465 Sector size: 4096 00:08:28.465 Filesystem size: 510.00MiB 00:08:28.465 Block group profiles: 00:08:28.465 Data: single 8.00MiB 00:08:28.465 Metadata: DUP 32.00MiB 00:08:28.465 System: DUP 8.00MiB 00:08:28.465 SSD detected: yes 00:08:28.465 Zoned device: no 00:08:28.465 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:28.465 Runtime features: free-space-tree 00:08:28.465 Checksum: crc32c 00:08:28.465 Number of devices: 1 00:08:28.465 Devices: 00:08:28.465 ID SIZE PATH 00:08:28.465 1 510.00MiB /dev/nvme0n1p1 00:08:28.465 00:08:28.465 00:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:28.465 00:45:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:29.396 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:29.396 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:29.396 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:29.396 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:29.396 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:29.396 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:29.396 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3946571 00:08:29.396 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:29.396 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:29.396 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:29.396 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:29.396 00:08:29.396 real 0m1.267s 00:08:29.396 user 0m0.015s 00:08:29.396 sys 0m0.044s 00:08:29.396 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:29.396 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:29.396 ************************************ 00:08:29.396 END TEST filesystem_in_capsule_btrfs 00:08:29.397 ************************************ 00:08:29.397 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:29.397 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:29.397 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.397 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:29.397 ************************************ 00:08:29.397 START TEST filesystem_in_capsule_xfs 00:08:29.397 ************************************ 00:08:29.397 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:29.397 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:29.397 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:29.397 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:29.397 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:29.397 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:29.397 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:29.397 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:29.397 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:29.397 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:29.397 00:45:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:29.397 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:29.397 = sectsz=512 attr=2, projid32bit=1 00:08:29.397 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:29.397 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:29.397 data = bsize=4096 blocks=130560, imaxpct=25 00:08:29.397 = sunit=0 swidth=0 blks 00:08:29.397 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:29.397 log =internal log bsize=4096 blocks=16384, version=2 00:08:29.397 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:29.397 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:30.769 Discarding blocks...Done. 
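[Editor's note] The three filesystem subtests in this capsule run (ext4, btrfs, and this xfs pass) all funnel through the same make_filesystem helper in common/autotest_common.sh — the @922-@941 xtrace lines above show its locals, the force-flag branch, and the mkfs call. A minimal sketch of that pattern, reconstructed from the trace: the helper name, the fstype/dev_name/i/force locals, and the -F/-f branch are taken from the log; the retry loop body and its bound are assumptions, not the verbatim helper.

    #!/usr/bin/env bash
    # Sketch of the make_filesystem pattern visible in the trace (not the verbatim helper).
    make_filesystem() {
        local fstype=$1        # e.g. ext4, btrfs, xfs
        local dev_name=$2      # e.g. /dev/nvme0n1p1
        local i=0
        local force
        # ext4 forces with -F; btrfs and xfs force with -f (matches the @927/@928/@930 branches)
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        # retry in case the partition node is not ready yet (loop bound is an assumption)
        until mkfs."$fstype" $force "$dev_name"; do
            (( ++i > 3 )) && return 1
            sleep 1
        done
        return 0
    }

Each subtest then exercises the result the same way: mount, touch a file, sync, remove it, sync again, and umount — the @23-@30 steps repeated for ext4, btrfs, and xfs above.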
00:08:30.769 00:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:30.769 00:45:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:33.295 00:45:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:33.295 00:45:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:33.295 00:45:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:33.295 00:45:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:33.295 00:45:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:33.295 00:45:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:33.295 00:45:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3946571 00:08:33.295 00:45:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:33.295 00:45:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:33.295 00:08:33.295 real 0m3.656s 00:08:33.295 user 0m0.010s 00:08:33.295 sys 0m0.047s 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:33.295 ************************************ 00:08:33.295 END TEST filesystem_in_capsule_xfs 00:08:33.295 ************************************ 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:33.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.295 00:45:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3946571 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3946571 ']' 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3946571 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:33.295 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3946571 00:08:33.296 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:33.296 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:33.296 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3946571' 00:08:33.296 killing process with pid 3946571 00:08:33.296 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 3946571 00:08:33.296 [2024-05-15 00:45:20.233068] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:33.296 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 3946571 00:08:33.556 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:33.556 00:08:33.556 real 0m10.891s 00:08:33.556 user 0m41.492s 00:08:33.556 sys 0m1.602s 00:08:33.556 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:33.556 00:45:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.556 ************************************ 00:08:33.556 END TEST nvmf_filesystem_in_capsule 00:08:33.556 ************************************ 00:08:33.556 00:45:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:33.556 00:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:08:33.556 00:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:33.556 00:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.556 00:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:33.556 00:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.556 00:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.817 rmmod nvme_tcp 00:08:33.817 rmmod nvme_fabrics 00:08:33.817 rmmod nvme_keyring 00:08:33.817 00:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.817 00:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:33.817 00:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:33.817 00:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:33.817 00:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:33.817 00:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:33.817 00:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:33.817 00:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:33.817 00:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:33.817 00:45:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.817 00:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.817 00:45:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.725 00:45:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:35.725 00:08:35.725 real 0m25.590s 00:08:35.725 user 1m22.449s 00:08:35.725 sys 0m4.596s 00:08:35.725 00:45:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:35.725 00:45:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.725 ************************************ 00:08:35.725 END TEST nvmf_filesystem 00:08:35.725 ************************************ 00:08:35.725 00:45:22 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:35.725 00:45:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:35.725 00:45:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:35.725 00:45:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:35.725 ************************************ 00:08:35.725 START TEST nvmf_target_discovery 00:08:35.725 ************************************ 00:08:35.725 00:45:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:35.984 * Looking for test storage... 
00:08:35.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:35.984 00:45:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.885 00:45:24 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:37.885 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:37.885 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:37.886 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:37.886 Found net devices under 0000:08:00.0: cvl_0_0 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:37.886 Found net devices under 0000:08:00.1: cvl_0_1 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:37.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:08:37.886 00:08:37.886 --- 10.0.0.2 ping statistics --- 00:08:37.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.886 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:37.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:08:37.886 00:08:37.886 --- 10.0.0.1 ping statistics --- 00:08:37.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.886 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3949288 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3949288 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 3949288 ']' 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:37.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:37.886 00:45:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.886 [2024-05-15 00:45:24.666257] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:08:37.886 [2024-05-15 00:45:24.666363] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.886 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.886 [2024-05-15 00:45:24.732454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:37.886 [2024-05-15 00:45:24.852140] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.886 [2024-05-15 00:45:24.852202] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.886 [2024-05-15 00:45:24.852218] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.886 [2024-05-15 00:45:24.852231] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.886 [2024-05-15 00:45:24.852242] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.886 [2024-05-15 00:45:24.852327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.886 [2024-05-15 00:45:24.852389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.886 [2024-05-15 00:45:24.852447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.886 [2024-05-15 00:45:24.852450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.144 00:45:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:38.144 00:45:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:38.144 00:45:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:38.144 00:45:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.144 00:45:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.144 [2024-05-15 00:45:25.013601] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:38.144 00:45:25 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.144 Null1 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.144 [2024-05-15 00:45:25.053673] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:38.144 [2024-05-15 00:45:25.053952] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.144 Null2 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:38.144 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.145 Null3 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.145 Null4 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.145 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:08:38.403 00:08:38.403 Discovery Log Number of Records 6, Generation counter 6 00:08:38.403 =====Discovery Log Entry 0====== 00:08:38.403 trtype: tcp 00:08:38.403 adrfam: ipv4 00:08:38.403 subtype: current discovery subsystem 00:08:38.403 treq: not required 00:08:38.403 portid: 0 00:08:38.403 trsvcid: 4420 00:08:38.403 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:38.403 traddr: 10.0.0.2 00:08:38.403 eflags: explicit discovery connections, duplicate discovery information 00:08:38.403 sectype: none 00:08:38.403 =====Discovery Log Entry 1====== 00:08:38.403 trtype: tcp 00:08:38.403 adrfam: ipv4 00:08:38.403 subtype: nvme subsystem 00:08:38.403 treq: not required 00:08:38.403 portid: 0 00:08:38.403 trsvcid: 4420 00:08:38.403 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:38.403 traddr: 10.0.0.2 00:08:38.403 eflags: none 00:08:38.403 sectype: none 00:08:38.403 =====Discovery Log Entry 2====== 00:08:38.403 trtype: tcp 00:08:38.403 adrfam: ipv4 00:08:38.403 subtype: nvme subsystem 00:08:38.403 treq: not required 00:08:38.403 portid: 0 00:08:38.403 trsvcid: 4420 00:08:38.403 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:38.403 traddr: 10.0.0.2 00:08:38.403 eflags: none 00:08:38.403 sectype: none 00:08:38.403 =====Discovery Log Entry 3====== 00:08:38.403 trtype: tcp 00:08:38.403 adrfam: ipv4 00:08:38.403 subtype: nvme subsystem 00:08:38.403 treq: not required 00:08:38.403 portid: 0 00:08:38.403 trsvcid: 4420 00:08:38.403 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:38.403 traddr: 10.0.0.2 
00:08:38.403 eflags: none
00:08:38.403 sectype: none
00:08:38.403 =====Discovery Log Entry 4======
00:08:38.403 trtype: tcp
00:08:38.403 adrfam: ipv4
00:08:38.403 subtype: nvme subsystem
00:08:38.403 treq: not required
00:08:38.403 portid: 0
00:08:38.403 trsvcid: 4420
00:08:38.403 subnqn: nqn.2016-06.io.spdk:cnode4
00:08:38.403 traddr: 10.0.0.2
00:08:38.403 eflags: none
00:08:38.403 sectype: none
00:08:38.403 =====Discovery Log Entry 5======
00:08:38.403 trtype: tcp
00:08:38.403 adrfam: ipv4
00:08:38.403 subtype: discovery subsystem referral
00:08:38.403 treq: not required
00:08:38.403 portid: 0
00:08:38.403 trsvcid: 4430
00:08:38.403 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:08:38.403 traddr: 10.0.0.2
00:08:38.403 eflags: none
00:08:38.403 sectype: none
00:08:38.403 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:08:38.403 Perform nvmf subsystem discovery via RPC
00:08:38.403 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:08:38.403 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:38.403 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:38.403 [
00:08:38.403 {
00:08:38.403 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:08:38.403 "subtype": "Discovery",
00:08:38.403 "listen_addresses": [
00:08:38.403 {
00:08:38.403 "trtype": "TCP",
00:08:38.403 "adrfam": "IPv4",
00:08:38.403 "traddr": "10.0.0.2",
00:08:38.403 "trsvcid": "4420"
00:08:38.403 }
00:08:38.403 ],
00:08:38.403 "allow_any_host": true,
00:08:38.403 "hosts": []
00:08:38.403 },
00:08:38.403 {
00:08:38.403 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:08:38.403 "subtype": "NVMe",
00:08:38.403 "listen_addresses": [
00:08:38.403 {
00:08:38.403 "trtype": "TCP",
00:08:38.403 "adrfam": "IPv4",
00:08:38.403 "traddr": "10.0.0.2",
00:08:38.403 "trsvcid": "4420"
00:08:38.403 }
00:08:38.403 ],
00:08:38.403 "allow_any_host": true,
00:08:38.403 "hosts": [],
00:08:38.403 "serial_number": "SPDK00000000000001",
00:08:38.403 "model_number": "SPDK bdev Controller",
00:08:38.403 "max_namespaces": 32,
00:08:38.403 "min_cntlid": 1,
00:08:38.403 "max_cntlid": 65519,
00:08:38.403 "namespaces": [
00:08:38.403 {
00:08:38.403 "nsid": 1,
00:08:38.404 "bdev_name": "Null1",
00:08:38.404 "name": "Null1",
00:08:38.404 "nguid": "8D9DDD1487FD45A29495C2617CFBD344",
00:08:38.404 "uuid": "8d9ddd14-87fd-45a2-9495-c2617cfbd344"
00:08:38.404 }
00:08:38.404 ]
00:08:38.404 },
00:08:38.404 {
00:08:38.404 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:08:38.404 "subtype": "NVMe",
00:08:38.404 "listen_addresses": [
00:08:38.404 {
00:08:38.404 "trtype": "TCP",
00:08:38.404 "adrfam": "IPv4",
00:08:38.404 "traddr": "10.0.0.2",
00:08:38.404 "trsvcid": "4420"
00:08:38.404 }
00:08:38.404 ],
00:08:38.404 "allow_any_host": true,
00:08:38.404 "hosts": [],
00:08:38.404 "serial_number": "SPDK00000000000002",
00:08:38.404 "model_number": "SPDK bdev Controller",
00:08:38.404 "max_namespaces": 32,
00:08:38.404 "min_cntlid": 1,
00:08:38.404 "max_cntlid": 65519,
00:08:38.404 "namespaces": [
00:08:38.404 {
00:08:38.404 "nsid": 1,
00:08:38.404 "bdev_name": "Null2",
00:08:38.404 "name": "Null2",
00:08:38.404 "nguid": "817B36F832AD40818BEFE7B2FB48FB74",
00:08:38.404 "uuid": "817b36f8-32ad-4081-8bef-e7b2fb48fb74"
00:08:38.404 }
00:08:38.404 ]
00:08:38.404 },
00:08:38.404 {
00:08:38.404 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:08:38.404 "subtype": "NVMe",
00:08:38.404 "listen_addresses": [
00:08:38.404 {
00:08:38.404 "trtype": "TCP",
00:08:38.404 "adrfam": "IPv4",
00:08:38.404 "traddr": "10.0.0.2",
00:08:38.404 "trsvcid": "4420"
00:08:38.404 }
00:08:38.404 ],
00:08:38.404 "allow_any_host": true,
00:08:38.404 "hosts": [],
00:08:38.404 "serial_number": "SPDK00000000000003",
00:08:38.404 "model_number": "SPDK bdev Controller",
00:08:38.404 "max_namespaces": 32,
00:08:38.404 "min_cntlid": 1,
00:08:38.404 "max_cntlid": 65519,
00:08:38.404 "namespaces": [
00:08:38.404 {
00:08:38.404 "nsid": 1,
00:08:38.404 "bdev_name": "Null3",
00:08:38.404 "name": "Null3",
00:08:38.404 "nguid": "A1382BCCEC7340A3931CAC70221FCFD7",
00:08:38.404 "uuid": "a1382bcc-ec73-40a3-931c-ac70221fcfd7"
00:08:38.404 }
00:08:38.404 ]
00:08:38.404 },
00:08:38.404 {
00:08:38.404 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:08:38.404 "subtype": "NVMe",
00:08:38.404 "listen_addresses": [
00:08:38.404 {
00:08:38.404 "trtype": "TCP",
00:08:38.404 "adrfam": "IPv4",
00:08:38.404 "traddr": "10.0.0.2",
00:08:38.404 "trsvcid": "4420"
00:08:38.404 }
00:08:38.404 ],
00:08:38.404 "allow_any_host": true,
00:08:38.404 "hosts": [],
00:08:38.404 "serial_number": "SPDK00000000000004",
00:08:38.404 "model_number": "SPDK bdev Controller",
00:08:38.404 "max_namespaces": 32,
00:08:38.404 "min_cntlid": 1,
00:08:38.404 "max_cntlid": 65519,
00:08:38.404 "namespaces": [
00:08:38.404 {
00:08:38.404 "nsid": 1,
00:08:38.404 "bdev_name": "Null4",
00:08:38.404 "name": "Null4",
00:08:38.404 "nguid": "34EE90AD404449EB93CD000897CE758E",
00:08:38.404 "uuid": "34ee90ad-4044-49eb-93cd-000897ce758e"
00:08:38.404 }
00:08:38.404 ]
00:08:38.404 }
00:08:38.404 ]
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- #
xtrace_disable 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:38.404 
00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:38.404 00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:38.404 rmmod nvme_tcp 00:08:38.404 rmmod nvme_fabrics 00:08:38.404 rmmod nvme_keyring 00:08:38.663 00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:38.663 00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:38.663 00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:38.663 00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3949288 ']' 00:08:38.663 00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3949288 00:08:38.663 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 3949288 ']' 00:08:38.663 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 3949288 00:08:38.663 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:38.663 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:38.664 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3949288 00:08:38.664 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:38.664 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:38.664 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3949288' 00:08:38.664 killing process with pid 3949288 00:08:38.664 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 3949288 00:08:38.664 [2024-05-15 00:45:25.497940] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:38.664 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 3949288 00:08:38.664 00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:38.664 00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:38.664 00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:38.664 00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.664 00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:38.664 00:45:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.664 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.664 00:45:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.198 00:45:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:41.198 00:08:41.198 real 0m4.990s 00:08:41.198 user 
0m4.024s 00:08:41.198 sys 0m1.545s 00:08:41.198 00:45:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:41.198 00:45:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:41.198 ************************************ 00:08:41.198 END TEST nvmf_target_discovery 00:08:41.198 ************************************ 00:08:41.198 00:45:27 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:41.199 00:45:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:41.199 00:45:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:41.199 00:45:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:41.199 ************************************ 00:08:41.199 START TEST nvmf_referrals 00:08:41.199 ************************************ 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:41.199 * Looking for test storage... 00:08:41.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.199 00:45:27 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:41.199 00:45:27 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:41.199 00:45:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:42.579 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:42.579 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:42.579 Found net devices under 0000:08:00.0: cvl_0_0 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:42.579 Found net devices under 0000:08:00.1: cvl_0_1 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
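A minimal sketch of the two-namespace loopback topology that nvmf_tcp_init is assembling in the trace around this point, assuming the interface names detected in this run (cvl_0_0 / cvl_0_1; substitute your own NIC pair). The remaining steps of the sequence continue in the trace below:

    # target NIC moves into its own network namespace; the initiator keeps the default one
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                                   # initiator -> target sanity check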
00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:42.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:08:42.579 00:08:42.579 --- 10.0.0.2 ping statistics --- 00:08:42.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.579 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:08:42.579 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:42.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:08:42.838 00:08:42.838 --- 10.0.0.1 ping statistics --- 00:08:42.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.838 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3950904 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3950904 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 3950904 ']' 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:42.838 00:45:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.838 [2024-05-15 00:45:29.714867] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:08:42.838 [2024-05-15 00:45:29.714978] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.838 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.838 [2024-05-15 00:45:29.782340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.096 [2024-05-15 00:45:29.902768] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.096 [2024-05-15 00:45:29.902831] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.096 [2024-05-15 00:45:29.902847] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.096 [2024-05-15 00:45:29.902860] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.096 [2024-05-15 00:45:29.902872] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.096 [2024-05-15 00:45:29.904957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.096 [2024-05-15 00:45:29.905042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.096 [2024-05-15 00:45:29.905116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.096 [2024-05-15 00:45:29.905151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.096 [2024-05-15 00:45:30.057637] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.096 [2024-05-15 00:45:30.069533] nvmf_rpc.c: 
610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:43.096 [2024-05-15 00:45:30.069816] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.096 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:43.354 00:45:30 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.354 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.612 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:43.895 00:45:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.176 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:44.176 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:44.176 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:44.176 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:44.176 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:44.176 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.176 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
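Each nvmf_discovery_add_referral / nvmf_discovery_remove_referral in this suite is verified from both sides, which is the pattern being traced above and below. A condensed sketch of the two checks, reusing the host NQN/ID generated for this run (substitute your own):

    # RPC view: referral addresses as the SPDK target reports them
    rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # fabric view: what an initiator actually sees on the discovery service at port 8009
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc \
        --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # the step passes only when the two sorted address lists match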
00:08:44.176 00:45:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 
--hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:44.176 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:44.456 rmmod nvme_tcp 00:08:44.456 rmmod nvme_fabrics 00:08:44.456 rmmod nvme_keyring 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3950904 ']' 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3950904 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 3950904 ']' 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 3950904 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3950904 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3950904' 00:08:44.456 killing process with pid 3950904 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 3950904 00:08:44.456 [2024-05-15 00:45:31.366361] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:44.456 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 3950904 00:08:44.715 00:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:44.715 00:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:44.715 00:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:44.715 00:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:44.715 00:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
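The teardown order used by nvmftestfini, condensed from the trace above and below; the body of _remove_spdk_ns is not shown in this log, so its namespace-deletion line here is an assumption:

    modprobe -v -r nvme-tcp           # retried in a loop; prints the rmmod messages seen above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                   # nvmf_tgt pid recorded at startup (3950904 in this run)
    _remove_spdk_ns                   # presumably: ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1          # drop the initiator-side address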
00:08:44.715 00:45:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.715 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.715 00:45:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.620 00:45:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:46.620 00:08:46.620 real 0m5.819s 00:08:46.620 user 0m7.927s 00:08:46.620 sys 0m1.702s 00:08:46.620 00:45:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:46.620 00:45:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.620 ************************************ 00:08:46.620 END TEST nvmf_referrals 00:08:46.620 ************************************ 00:08:46.620 00:45:33 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:46.620 00:45:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:46.620 00:45:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:46.620 00:45:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:46.879 ************************************ 00:08:46.879 START TEST nvmf_connect_disconnect 00:08:46.879 ************************************ 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:46.879 * Looking for test storage... 00:08:46.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.879 00:45:33 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
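(The common.sh block above derives the NVMe host identity once per run and reuses it for every connect. Stripped of the xtrace it is roughly the sketch below; the connect line is illustrative, with the subsystem NQN and port coming from the individual test, and the exact way common.sh extracts the host ID from the NQN is an assumption.)

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # here: nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumption: the host ID is the UUID suffix of the NQN
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"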
00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:46.879 00:45:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:48.789 
00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:48.789 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:48.789 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:48.789 Found net devices under 0000:08:00.0: cvl_0_0 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:48.789 Found net devices under 0000:08:00.1: cvl_0_1 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:48.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:08:48.789 00:08:48.789 --- 10.0.0.2 ping statistics --- 00:08:48.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.789 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:48.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:08:48.789 00:08:48.789 --- 10.0.0.1 ping statistics --- 00:08:48.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.789 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3952609 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3952609 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 3952609 ']' 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:48.789 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.789 [2024-05-15 00:45:35.547666] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:08:48.789 [2024-05-15 00:45:35.547757] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.789 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.789 [2024-05-15 00:45:35.612291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.789 [2024-05-15 00:45:35.729132] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.789 [2024-05-15 00:45:35.729193] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.789 [2024-05-15 00:45:35.729208] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.789 [2024-05-15 00:45:35.729221] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.789 [2024-05-15 00:45:35.729232] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.789 [2024-05-15 00:45:35.729309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.789 [2024-05-15 00:45:35.729390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.789 [2024-05-15 00:45:35.729442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.789 [2024-05-15 00:45:35.729446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.046 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:49.046 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:49.046 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.046 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.047 [2024-05-15 00:45:35.876489] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:49.047 00:45:35 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.047 [2024-05-15 00:45:35.925028] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:49.047 [2024-05-15 00:45:35.925302] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:49.047 00:45:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:51.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.679 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:01.679 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:01.679 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:01.679 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:01.679 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:01.679 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:01.679 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.679 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:01.679 rmmod nvme_tcp 00:09:01.938 rmmod nvme_fabrics 00:09:01.938 rmmod nvme_keyring 00:09:01.938 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.938 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:01.938 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:01.938 00:45:48 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3952609 ']' 00:09:01.938 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3952609 00:09:01.938 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3952609 ']' 00:09:01.938 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 3952609 00:09:01.938 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:09:01.938 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:01.938 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3952609 00:09:01.938 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:01.938 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:01.938 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3952609' 00:09:01.938 killing process with pid 3952609 00:09:01.938 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 3952609 00:09:01.938 [2024-05-15 00:45:48.791956] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:01.938 00:45:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 3952609 00:09:02.198 00:45:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:02.198 00:45:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:02.198 00:45:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:02.198 00:45:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:02.198 00:45:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:02.198 00:45:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.198 00:45:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.198 00:45:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.112 00:45:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:04.112 00:09:04.112 real 0m17.378s 00:09:04.112 user 0m52.362s 00:09:04.112 sys 0m2.844s 00:09:04.112 00:45:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:04.112 00:45:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:04.112 ************************************ 00:09:04.112 END TEST nvmf_connect_disconnect 00:09:04.112 ************************************ 00:09:04.113 00:45:51 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:04.113 00:45:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:04.113 00:45:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:04.113 00:45:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:04.113 ************************************ 00:09:04.113 START TEST nvmf_multitarget 
00:09:04.113 ************************************ 00:09:04.113 00:45:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:04.371 * Looking for test storage... 00:09:04.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.371 00:45:51 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
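(nvmftestinit here repeats the same phy-mode bring-up already traced in the connect_disconnect run above: nvmf_tcp_init moves one E810 port into a private network namespace for the target and leaves the peer port in the root namespace as the initiator. Condensed from the xtrace, the topology setup is:)

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator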
00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:04.372 00:45:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:06.275 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:06.275 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:06.275 Found net devices under 0000:08:00.0: cvl_0_0 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:06.275 Found net devices under 0000:08:00.1: cvl_0_1 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:06.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:06.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:09:06.275 00:09:06.275 --- 10.0.0.2 ping statistics --- 00:09:06.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.275 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:09:06.275 00:09:06.275 --- 10.0.0.1 ping statistics --- 00:09:06.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.275 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:06.275 00:45:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:06.275 00:45:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:06.275 00:45:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:06.275 00:45:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:06.275 00:45:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:06.275 00:45:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3955435 00:09:06.275 00:45:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:06.275 00:45:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3955435 00:09:06.275 00:45:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 3955435 ']' 00:09:06.275 00:45:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.275 00:45:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:06.276 00:45:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.276 00:45:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:06.276 00:45:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:06.276 [2024-05-15 00:45:53.063514] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:09:06.276 [2024-05-15 00:45:53.063618] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.276 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.276 [2024-05-15 00:45:53.133742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.276 [2024-05-15 00:45:53.253861] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.276 [2024-05-15 00:45:53.253925] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.276 [2024-05-15 00:45:53.253955] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.276 [2024-05-15 00:45:53.253974] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.276 [2024-05-15 00:45:53.253987] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.276 [2024-05-15 00:45:53.254046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.276 [2024-05-15 00:45:53.254095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.276 [2024-05-15 00:45:53.254182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.276 [2024-05-15 00:45:53.254215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.534 00:45:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:06.534 00:45:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:09:06.534 00:45:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.534 00:45:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:06.534 00:45:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:06.534 00:45:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.534 00:45:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:06.534 00:45:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:06.534 00:45:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:06.534 00:45:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:06.534 00:45:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:06.792 "nvmf_tgt_1" 00:09:06.792 00:45:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:06.792 "nvmf_tgt_2" 00:09:06.792 00:45:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:06.792 00:45:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:07.049 00:45:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:07.049 
00:45:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:07.049 true 00:09:07.049 00:45:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:07.307 true 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:07.307 rmmod nvme_tcp 00:09:07.307 rmmod nvme_fabrics 00:09:07.307 rmmod nvme_keyring 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3955435 ']' 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3955435 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 3955435 ']' 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 3955435 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:07.307 00:45:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3955435 00:09:07.566 00:45:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:07.566 00:45:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:07.566 00:45:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3955435' 00:09:07.566 killing process with pid 3955435 00:09:07.566 00:45:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 3955435 00:09:07.566 00:45:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 3955435 00:09:07.566 00:45:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:07.566 00:45:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:07.566 00:45:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:07.566 00:45:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:07.566 00:45:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:07.566 00:45:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.566 00:45:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.566 00:45:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.106 00:45:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:10.106 00:09:10.106 real 0m5.509s 00:09:10.106 user 0m6.710s 00:09:10.106 sys 0m1.692s 00:09:10.106 00:45:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:10.106 00:45:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:10.106 ************************************ 00:09:10.106 END TEST nvmf_multitarget 00:09:10.106 ************************************ 00:09:10.107 00:45:56 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:10.107 00:45:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:10.107 00:45:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:10.107 00:45:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:10.107 ************************************ 00:09:10.107 START TEST nvmf_rpc 00:09:10.107 ************************************ 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:10.107 * Looking for test storage... 00:09:10.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.107 00:45:56 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.107 
00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:10.107 00:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:11.481 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:11.481 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:11.481 Found net devices under 0000:08:00.0: cvl_0_0 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.481 
00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:11.481 Found net devices under 0000:08:00.1: cvl_0_1 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:11.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:11.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:09:11.481 00:09:11.481 --- 10.0.0.2 ping statistics --- 00:09:11.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.481 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:11.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:11.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:09:11.481 00:09:11.481 --- 10.0.0.1 ping statistics --- 00:09:11.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.481 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3957065 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3957065 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 3957065 ']' 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:11.481 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.738 [2024-05-15 00:45:58.562883] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
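The interface plumbing logged just above is what lets the target (inside the namespace, 10.0.0.2) and the initiator (root namespace, 10.0.0.1) talk over TCP port 4420. The same setup by hand, with interface names and addresses exactly as they appear in this log:

   ip netns add cvl_0_0_ns_spdk                        # namespace the target will run in
   ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
   ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
   ip link set cvl_0_1 up
   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
   ip netns exec cvl_0_0_ns_spdk ip link set lo up
   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
   ping -c 1 10.0.0.2                                  # root namespace -> target namespace
   ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace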
00:09:11.738 [2024-05-15 00:45:58.562986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.738 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.738 [2024-05-15 00:45:58.627097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:11.738 [2024-05-15 00:45:58.743801] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.738 [2024-05-15 00:45:58.743862] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:11.738 [2024-05-15 00:45:58.743877] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.738 [2024-05-15 00:45:58.743890] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.738 [2024-05-15 00:45:58.743902] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:11.738 [2024-05-15 00:45:58.743992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.738 [2024-05-15 00:45:58.744320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.738 [2024-05-15 00:45:58.744412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.738 [2024-05-15 00:45:58.744417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:11.995 "tick_rate": 2700000000, 00:09:11.995 "poll_groups": [ 00:09:11.995 { 00:09:11.995 "name": "nvmf_tgt_poll_group_000", 00:09:11.995 "admin_qpairs": 0, 00:09:11.995 "io_qpairs": 0, 00:09:11.995 "current_admin_qpairs": 0, 00:09:11.995 "current_io_qpairs": 0, 00:09:11.995 "pending_bdev_io": 0, 00:09:11.995 "completed_nvme_io": 0, 00:09:11.995 "transports": [] 00:09:11.995 }, 00:09:11.995 { 00:09:11.995 "name": "nvmf_tgt_poll_group_001", 00:09:11.995 "admin_qpairs": 0, 00:09:11.995 "io_qpairs": 0, 00:09:11.995 "current_admin_qpairs": 0, 00:09:11.995 "current_io_qpairs": 0, 00:09:11.995 "pending_bdev_io": 0, 00:09:11.995 "completed_nvme_io": 0, 00:09:11.995 "transports": [] 00:09:11.995 }, 00:09:11.995 { 00:09:11.995 "name": "nvmf_tgt_poll_group_002", 00:09:11.995 "admin_qpairs": 0, 00:09:11.995 "io_qpairs": 0, 00:09:11.995 "current_admin_qpairs": 0, 00:09:11.995 "current_io_qpairs": 0, 00:09:11.995 "pending_bdev_io": 0, 00:09:11.995 "completed_nvme_io": 0, 00:09:11.995 "transports": [] 
00:09:11.995 }, 00:09:11.995 { 00:09:11.995 "name": "nvmf_tgt_poll_group_003", 00:09:11.995 "admin_qpairs": 0, 00:09:11.995 "io_qpairs": 0, 00:09:11.995 "current_admin_qpairs": 0, 00:09:11.995 "current_io_qpairs": 0, 00:09:11.995 "pending_bdev_io": 0, 00:09:11.995 "completed_nvme_io": 0, 00:09:11.995 "transports": [] 00:09:11.995 } 00:09:11.995 ] 00:09:11.995 }' 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.995 [2024-05-15 00:45:58.990872] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.995 00:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.995 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.995 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:11.995 "tick_rate": 2700000000, 00:09:11.995 "poll_groups": [ 00:09:11.995 { 00:09:11.995 "name": "nvmf_tgt_poll_group_000", 00:09:11.995 "admin_qpairs": 0, 00:09:11.995 "io_qpairs": 0, 00:09:11.995 "current_admin_qpairs": 0, 00:09:11.995 "current_io_qpairs": 0, 00:09:11.995 "pending_bdev_io": 0, 00:09:11.995 "completed_nvme_io": 0, 00:09:11.995 "transports": [ 00:09:11.995 { 00:09:11.995 "trtype": "TCP" 00:09:11.995 } 00:09:11.995 ] 00:09:11.995 }, 00:09:11.995 { 00:09:11.995 "name": "nvmf_tgt_poll_group_001", 00:09:11.995 "admin_qpairs": 0, 00:09:11.995 "io_qpairs": 0, 00:09:11.995 "current_admin_qpairs": 0, 00:09:11.995 "current_io_qpairs": 0, 00:09:11.995 "pending_bdev_io": 0, 00:09:11.995 "completed_nvme_io": 0, 00:09:11.995 "transports": [ 00:09:11.995 { 00:09:11.995 "trtype": "TCP" 00:09:11.995 } 00:09:11.995 ] 00:09:11.995 }, 00:09:11.995 { 00:09:11.995 "name": "nvmf_tgt_poll_group_002", 00:09:11.995 "admin_qpairs": 0, 00:09:11.995 "io_qpairs": 0, 00:09:11.995 "current_admin_qpairs": 0, 00:09:11.995 "current_io_qpairs": 0, 00:09:11.995 "pending_bdev_io": 0, 00:09:11.995 "completed_nvme_io": 0, 00:09:11.995 "transports": [ 00:09:11.995 { 00:09:11.995 "trtype": "TCP" 00:09:11.995 } 00:09:11.995 ] 00:09:11.995 }, 00:09:11.995 { 00:09:11.995 "name": "nvmf_tgt_poll_group_003", 00:09:11.995 "admin_qpairs": 0, 00:09:11.995 "io_qpairs": 0, 00:09:11.995 "current_admin_qpairs": 0, 00:09:11.995 "current_io_qpairs": 0, 00:09:11.995 "pending_bdev_io": 0, 00:09:11.995 "completed_nvme_io": 0, 00:09:11.995 "transports": [ 00:09:11.995 { 00:09:11.995 "trtype": "TCP" 00:09:11.995 } 00:09:11.995 ] 00:09:11.995 } 00:09:11.995 ] 
00:09:11.995 }' 00:09:11.995 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:11.995 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:11.995 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:11.995 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.253 Malloc1 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.253 [2024-05-15 00:45:59.145249] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:12.253 [2024-05-15 00:45:59.145527] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.253 00:45:59 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:12.253 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:12.254 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:09:12.254 [2024-05-15 00:45:59.167993] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:09:12.254 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:12.254 could not add new controller: failed to write to nvme-fabrics device 00:09:12.254 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:12.254 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:12.254 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:12.254 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:12.254 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:12.254 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.254 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.254 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.254 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:12.820 00:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
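What the test just exercised is the per-subsystem host whitelist: with allow_any_host disabled, a connect from an unregistered host NQN is rejected by the target ("does not allow host", surfacing as an Input/output error on /dev/nvme-fabrics), and only succeeds after nvmf_subsystem_add_host registers that NQN. Reduced to the RPC and nvme-cli calls seen in this log (rpc.py stands in for the test's rpc_cmd wrapper; the host NQN is the one generated above):

   RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
   HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
   $RPC nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1      # deny unknown hosts
   nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
       --hostnqn=$HOSTNQN --hostid=a27f578f-8275-e111-bd1d-001e673e77fc  # rejected: I/O error
   $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 $HOSTNQN      # whitelist this host
   nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
       --hostnqn=$HOSTNQN --hostid=a27f578f-8275-e111-bd1d-001e673e77fc  # now succeeds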
00:09:12.820 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:12.820 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:12.820 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:12.820 00:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:14.717 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:14.717 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:14.717 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:14.717 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.718 [2024-05-15 00:46:01.755701] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:09:14.718 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:14.718 could not add new controller: failed to write to nvme-fabrics device 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.718 00:46:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:15.282 00:46:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:15.282 00:46:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:15.282 00:46:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.282 00:46:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:15.282 00:46:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:17.176 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:17.176 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:17.176 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:17.176 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:17.176 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:17.176 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:17.176 00:46:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:17.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.176 00:46:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:17.176 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:17.176 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:17.176 00:46:04 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.176 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:17.176 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.434 [2024-05-15 00:46:04.261584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.434 00:46:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:17.692 00:46:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:17.692 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:17.692 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.692 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:17.692 00:46:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:20.216 
00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.216 [2024-05-15 00:46:06.777021] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.216 00:46:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:20.216 00:46:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:20.216 00:46:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:20.216 00:46:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:20.216 00:46:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:20.216 00:46:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.822 00:46:09 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.822 [2024-05-15 00:46:09.372493] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:22.822 00:46:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:25.350 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:25.350 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:25.350 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.350 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:25.350 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.350 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:25.350 00:46:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:25.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.350 00:46:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:25.350 00:46:11 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:09:25.350 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.351 [2024-05-15 00:46:11.893110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.351 00:46:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.351 00:46:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:09:25.351 00:46:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:25.351 00:46:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:25.351 00:46:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:25.351 00:46:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:27.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.880 
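Each pass of the rpc.sh@81 loop above and below exercises the full attach/detach path: create the subsystem with serial SPDKISFASTANDAWESOME, add a TCP listener on 10.0.0.2:4420, attach Malloc1 as namespace 5, allow any host, connect with nvme-cli, then undo it all. The readiness gate between connect and disconnect is waitforserial, which polls lsblk until a block device carrying the expected serial shows up. A minimal sketch of that helper, reconstructed from the common/autotest_common.sh@1194-1204 xtrace (the 15-retry bound and 2-second sleep come straight from the trace; the || true guard is an added assumption so a zero match count cannot abort a `set -e` shell):

waitforserial() {
    local serial=$1
    local nvme_device_counter=1 nvme_devices=0
    local i=0
    sleep 2
    while ((i++ <= 15)); do
        # count block devices whose SERIAL column matches the expected serial
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
        ((nvme_devices == nvme_device_counter)) && return 0
        sleep 2
    done
    return 1
}

waitforserial_disconnect (common/autotest_common.sh@1215-1227 above) inverts the check with grep -q -w, returning once the serial has vanished from both lsblk listings.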
[2024-05-15 00:46:14.420295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:27.880 00:46:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.408 [2024-05-15 00:46:16.939071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.408 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 [2024-05-15 00:46:16.987128] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 [2024-05-15 00:46:17.035278] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.409 
00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 [2024-05-15 00:46:17.083434] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 [2024-05-15 00:46:17.131620] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
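The nvmf_get_stats dump that follows is machine-checked rather than merely printed: target/rpc.sh captures the JSON into $stats and sums per-poll-group counters with a small jsum helper, asserting that the totals are positive. A sketch of jsum as it appears in the target/rpc.sh@19-20 xtrace (reading the JSON from $stats is an inference from the stats= capture immediately below):

jsum() {
    # sum the numbers a jq filter extracts from the captured stats JSON
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
}

Against the four poll groups reported below, jsum '.poll_groups[].admin_qpairs' yields 2+2+1+2 = 7 and jsum '.poll_groups[].io_qpairs' yields 4*56 = 224, matching the (( 7 > 0 )) and (( 224 > 0 )) assertions in the trace.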
00:09:30.409 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:30.409 "tick_rate": 2700000000, 00:09:30.409 "poll_groups": [ 00:09:30.409 { 00:09:30.409 "name": "nvmf_tgt_poll_group_000", 00:09:30.409 "admin_qpairs": 2, 00:09:30.409 "io_qpairs": 56, 00:09:30.409 "current_admin_qpairs": 0, 00:09:30.409 "current_io_qpairs": 0, 00:09:30.409 "pending_bdev_io": 0, 00:09:30.409 "completed_nvme_io": 67, 00:09:30.409 "transports": [ 00:09:30.409 { 00:09:30.409 "trtype": "TCP" 00:09:30.409 } 00:09:30.409 ] 00:09:30.409 }, 00:09:30.409 { 00:09:30.409 "name": "nvmf_tgt_poll_group_001", 00:09:30.409 "admin_qpairs": 2, 00:09:30.410 "io_qpairs": 56, 00:09:30.410 "current_admin_qpairs": 0, 00:09:30.410 "current_io_qpairs": 0, 00:09:30.410 "pending_bdev_io": 0, 00:09:30.410 "completed_nvme_io": 106, 00:09:30.410 "transports": [ 00:09:30.410 { 00:09:30.410 "trtype": "TCP" 00:09:30.410 } 00:09:30.410 ] 00:09:30.410 }, 00:09:30.410 { 00:09:30.410 "name": "nvmf_tgt_poll_group_002", 00:09:30.410 "admin_qpairs": 1, 00:09:30.410 "io_qpairs": 56, 00:09:30.410 "current_admin_qpairs": 0, 00:09:30.410 "current_io_qpairs": 0, 00:09:30.410 "pending_bdev_io": 0, 00:09:30.410 "completed_nvme_io": 109, 00:09:30.410 "transports": [ 00:09:30.410 { 00:09:30.410 "trtype": "TCP" 00:09:30.410 } 00:09:30.410 ] 00:09:30.410 }, 00:09:30.410 { 00:09:30.410 "name": "nvmf_tgt_poll_group_003", 00:09:30.410 "admin_qpairs": 2, 00:09:30.410 "io_qpairs": 56, 00:09:30.410 "current_admin_qpairs": 0, 00:09:30.410 "current_io_qpairs": 0, 00:09:30.410 "pending_bdev_io": 0, 00:09:30.410 "completed_nvme_io": 292, 00:09:30.410 "transports": [ 00:09:30.410 { 00:09:30.410 "trtype": "TCP" 00:09:30.410 } 00:09:30.410 ] 00:09:30.410 } 00:09:30.410 ] 00:09:30.410 }' 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 224 > 0 )) 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:30.410 rmmod nvme_tcp 00:09:30.410 rmmod nvme_fabrics 00:09:30.410 rmmod nvme_keyring 00:09:30.410 00:46:17 
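The rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above open nvmftestfini, and the teardown continues below: unload the remaining initiator modules, kill the target process by PID, and flush the namespaced interface addresses. A hedged sketch of the killprocess step visible in the common/autotest_common.sh@946-970 trace, where 3957065 is this run's nvmf_tgt PID (the trace compares the process comm against "sudo"; what the real script does on that branch is not visible here, so this sketch simply refuses):

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2> /dev/null || return 0   # nothing to do if already gone
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1   # assumption: bail rather than kill sudo
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # reaping works here because nvmf_tgt is a child of the test shell
}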
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3957065 ']' 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3957065 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 3957065 ']' 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 3957065 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3957065 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3957065' 00:09:30.410 killing process with pid 3957065 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 3957065 00:09:30.410 [2024-05-15 00:46:17.362066] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:30.410 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 3957065 00:09:30.668 00:46:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:30.668 00:46:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:30.668 00:46:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:30.668 00:46:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:30.669 00:46:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:30.669 00:46:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.669 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.669 00:46:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.205 00:46:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:33.205 00:09:33.205 real 0m22.947s 00:09:33.205 user 1m14.287s 00:09:33.205 sys 0m3.521s 00:09:33.205 00:46:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:33.205 00:46:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.205 ************************************ 00:09:33.205 END TEST nvmf_rpc 00:09:33.205 ************************************ 00:09:33.205 00:46:19 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:33.205 00:46:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:33.205 00:46:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:33.205 00:46:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.205 ************************************ 00:09:33.205 START TEST nvmf_invalid 00:09:33.205 ************************************ 00:09:33.205 00:46:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:33.205 * Looking for test storage... 00:09:33.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.205 00:46:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.205 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:33.205 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.205 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.205 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.205 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.205 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.205 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.205 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.205 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.205 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.205 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.205 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:33.205 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:33.206 00:46:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:34.586 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.586 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:34.586 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:34.586 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:34.586 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:34.586 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:34.586 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:34.586 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:34.587 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:34.587 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:34.587 Found net devices under 0000:08:00.0: cvl_0_0 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:34.587 Found net devices under 0000:08:00.1: cvl_0_1 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:34.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
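The nvmf_tcp_init sequence above wires the two ports of one NIC back-to-back through a network namespace so a single machine can play both initiator and target: cvl_0_0 (10.0.0.2, target side) is moved into cvl_0_0_ns_spdk, cvl_0_1 (10.0.0.1, initiator side) stays in the root namespace, and an iptables rule admits TCP port 4420. Every target-side command, including nvmf_tgt itself further down, then runs under ip netns exec cvl_0_0_ns_spdk. Condensed from the nvmf/common.sh@244-267 trace (interface names are specific to this rig; the ping whose header appears just above and whose results follow is the reachability check):

# loop one NIC port back to the other through a namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # root ns -> target ns; output continues below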
00:09:34.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:09:34.587 00:09:34.587 --- 10.0.0.2 ping statistics --- 00:09:34.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.587 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:34.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:09:34.587 00:09:34.587 --- 10.0.0.1 ping statistics --- 00:09:34.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.587 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3960447 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3960447 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 3960447 ']' 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:34.587 00:46:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:34.587 [2024-05-15 00:46:21.630113] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:09:34.587 [2024-05-15 00:46:21.630204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.845 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.845 [2024-05-15 00:46:21.694819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.845 [2024-05-15 00:46:21.811658] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.845 [2024-05-15 00:46:21.811718] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.846 [2024-05-15 00:46:21.811733] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.846 [2024-05-15 00:46:21.811746] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.846 [2024-05-15 00:46:21.811759] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.846 [2024-05-15 00:46:21.811852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.846 [2024-05-15 00:46:21.811904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.846 [2024-05-15 00:46:21.811959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.846 [2024-05-15 00:46:21.811962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.103 00:46:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:35.103 00:46:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:09:35.103 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:35.103 00:46:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:35.103 00:46:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:35.103 00:46:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.103 00:46:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:35.103 00:46:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10500 00:09:35.361 [2024-05-15 00:46:22.222407] nvmf_rpc.c: 391:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:35.361 00:46:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:35.361 { 00:09:35.361 "nqn": "nqn.2016-06.io.spdk:cnode10500", 00:09:35.361 "tgt_name": "foobar", 00:09:35.361 "method": "nvmf_create_subsystem", 00:09:35.361 "req_id": 1 00:09:35.361 } 00:09:35.361 Got JSON-RPC error response 00:09:35.361 response: 00:09:35.361 { 00:09:35.361 "code": -32603, 00:09:35.361 "message": "Unable to find target foobar" 00:09:35.361 }' 00:09:35.361 00:46:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:35.361 { 00:09:35.361 "nqn": "nqn.2016-06.io.spdk:cnode10500", 00:09:35.361 "tgt_name": "foobar", 00:09:35.361 "method": "nvmf_create_subsystem", 00:09:35.361 "req_id": 1 00:09:35.361 } 00:09:35.361 Got JSON-RPC error response 00:09:35.361 response: 00:09:35.361 { 00:09:35.361 "code": -32603, 00:09:35.361 "message": "Unable to find target foobar" 00:09:35.361 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:35.361 00:46:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:35.361 00:46:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23518 00:09:35.619 [2024-05-15 00:46:22.519444] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23518: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:35.619 00:46:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:35.619 { 00:09:35.619 "nqn": "nqn.2016-06.io.spdk:cnode23518", 00:09:35.619 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:35.619 "method": "nvmf_create_subsystem", 00:09:35.619 "req_id": 1 00:09:35.619 } 00:09:35.619 Got JSON-RPC error response 00:09:35.619 response: 00:09:35.619 { 00:09:35.619 "code": -32602, 00:09:35.619 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:35.619 }' 00:09:35.619 00:46:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:35.619 { 00:09:35.619 "nqn": "nqn.2016-06.io.spdk:cnode23518", 00:09:35.619 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:35.619 "method": "nvmf_create_subsystem", 00:09:35.619 "req_id": 1 00:09:35.619 } 00:09:35.619 Got JSON-RPC error response 00:09:35.619 response: 00:09:35.619 { 00:09:35.619 "code": -32602, 00:09:35.619 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:35.619 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:35.619 00:46:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:35.619 00:46:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10871 00:09:35.878 [2024-05-15 00:46:22.776263] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10871: invalid model number 'SPDK_Controller' 00:09:35.878 00:46:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:35.878 { 00:09:35.878 "nqn": "nqn.2016-06.io.spdk:cnode10871", 00:09:35.878 "model_number": "SPDK_Controller\u001f", 00:09:35.878 "method": "nvmf_create_subsystem", 00:09:35.878 "req_id": 1 00:09:35.878 } 00:09:35.878 Got JSON-RPC error response 00:09:35.878 response: 00:09:35.878 { 00:09:35.878 "code": -32602, 00:09:35.878 "message": "Invalid MN SPDK_Controller\u001f" 00:09:35.878 }' 00:09:35.878 00:46:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:35.878 { 00:09:35.878 "nqn": "nqn.2016-06.io.spdk:cnode10871", 00:09:35.878 "model_number": "SPDK_Controller\u001f", 00:09:35.878 "method": "nvmf_create_subsystem", 00:09:35.878 "req_id": 1 00:09:35.878 } 00:09:35.878 Got JSON-RPC error response 00:09:35.878 response: 00:09:35.878 { 00:09:35.878 "code": -32602, 00:09:35.878 "message": "Invalid MN SPDK_Controller\u001f" 00:09:35.878 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:35.878 00:46:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:35.878 00:46:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:35.878 00:46:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' 
'90' '91' ... '127')   [remaining array entries elided: chars spans ASCII codes 32-127]
00:09:35.878 [... target/invalid.sh@21-25 trace elided: 21 loop passes, each picking one code with printf %x, rendering it with echo -e, and appending the character to string ...]
00:09:35.879 00:46:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ F == \- ]]
00:09:35.879 00:46:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Fi3r#,sjR3dh_xJF"k,# '
00:09:35.879 00:46:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Fi3r#,sjR3dh_xJF"k,# ' nqn.2016-06.io.spdk:cnode5561
00:09:36.137 [2024-05-15 00:46:23.153548] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5561: invalid serial number 'Fi3r#,sjR3dh_xJF"k,# '
00:09:36.137 00:46:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode5561", "serial_number": "Fi3r#,sjR3dh_xJF\"k,# ", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid SN Fi3r#,sjR3dh_xJF\"k,# " }'
00:09:36.137 00:46:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: ... "message": "Invalid SN Fi3r#,sjR3dh_xJF\"k,# " == *\I\n\v\a\l\i\d\ \S\N* ]]
00:09:36.137 00:46:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:09:36.137 00:46:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:09:36.137 [... target/invalid.sh@21-25 trace elided: the same 96-entry chars array and 41 loop passes building the random model number ...]
00:09:36.397 00:46:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ $ == \- ]]
00:09:36.397 00:46:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '$DCmdY-3jdYftC|btmO7Xz5K!3j4UKAjw^1)SMNWR'
00:09:36.397 00:46:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '$DCmdY-3jdYftC|btmO7Xz5K!3j4UKAjw^1)SMNWR' nqn.2016-06.io.spdk:cnode28793
00:09:36.655 [2024-05-15 00:46:23.582916] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28793: invalid model number '$DCmdY-3jdYftC|btmO7Xz5K!3j4UKAjw^1)SMNWR'
00:09:36.655 00:46:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode28793", "model_number": "$DCmdY-3jdYftC|btmO7Xz5K!3j4UKAjw^1)SMNWR", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid MN $DCmdY-3jdYftC|btmO7Xz5K!3j4UKAjw^1)SMNWR" }'
00:09:36.655 00:46:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: ... "message": "Invalid MN $DCmdY-3jdYftC|btmO7Xz5K!3j4UKAjw^1)SMNWR" == *\I\n\v\a\l\i\d\ \M\N* ]]
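The two elided loops above are target/invalid.sh's gen_random_s helper building a random serial number and model number one character at a time; each string is then handed to nvmf_create_subsystem, and the test passes only when the JSON-RPC reply carries the expected "Invalid SN" / "Invalid MN" text. A minimal sketch of that negative-test pattern, with simplified helper logic (the authoritative version lives in spdk/test/nvmf/target/invalid.sh):

    # sketch: random printable string fed to an RPC that is expected to fail
    gen_random_s() {
        local length=$1 ll string=
        for ((ll = 0; ll < length; ll++)); do
            # pick a random ASCII code in [32,126] and append that character
            string+=$(printf "\\x$(printf %x $((RANDOM % 95 + 32)))")
        done
        # the traced script rejects a leading '-' (it would look like an option);
        # this sketch just swaps it for a space
        [[ ${string:0:1} == - ]] && string=" ${string:1}"
        echo "$string"
    }

    sn=$(gen_random_s 21)
    out=$(scripts/rpc.py nvmf_create_subsystem -s "$sn" \
          nqn.2016-06.io.spdk:cnode5561 2>&1) || true
    # assert on the error text, not the exit code: failing with the right
    # message is the passing outcome here
    [[ $out == *"Invalid SN"* ]]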
00:09:36.655 00:46:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:09:36.912 [2024-05-15 00:46:23.875958] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:36.912 00:46:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:09:37.170 00:46:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:09:37.170 00:46:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:09:37.170 00:46:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:09:37.170 00:46:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:09:37.170 00:46:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:09:37.735 [2024-05-15 00:46:24.505910] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:09:37.735 [2024-05-15 00:46:24.506014] nvmf_rpc.c: 789:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:09:37.735 00:46:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode", "listen_address": { "trtype": "tcp", "traddr": "", "trsvcid": "4421" }, "method": "nvmf_subsystem_remove_listener", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid parameters" }'
00:09:37.736 00:46:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: ... "message": "Invalid parameters" != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:09:37.736 00:46:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3322 -i 0
00:09:37.736 [2024-05-15 00:46:24.742722] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3322: invalid cntlid range [0-65519]
00:09:37.736 00:46:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode3322", "min_cntlid": 0, "method": "nvmf_create_subsystem", "req_id": 1 }
00:09:37.736 Got JSON-RPC error response 00:09:37.736 response: 00:09:37.736 { 00:09:37.736 "code": -32602, 00:09:37.736 "message": "Invalid cntlid range [0-65519]" 00:09:37.736 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:37.736 00:46:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18887 -i 65520 00:09:37.993 [2024-05-15 00:46:24.987579] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18887: invalid cntlid range [65520-65519] 00:09:37.993 00:46:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:37.993 { 00:09:37.993 "nqn": "nqn.2016-06.io.spdk:cnode18887", 00:09:37.993 "min_cntlid": 65520, 00:09:37.993 "method": "nvmf_create_subsystem", 00:09:37.993 "req_id": 1 00:09:37.993 } 00:09:37.993 Got JSON-RPC error response 00:09:37.993 response: 00:09:37.993 { 00:09:37.993 "code": -32602, 00:09:37.993 "message": "Invalid cntlid range [65520-65519]" 00:09:37.993 }' 00:09:37.993 00:46:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:37.993 { 00:09:37.993 "nqn": "nqn.2016-06.io.spdk:cnode18887", 00:09:37.993 "min_cntlid": 65520, 00:09:37.993 "method": "nvmf_create_subsystem", 00:09:37.993 "req_id": 1 00:09:37.993 } 00:09:37.993 Got JSON-RPC error response 00:09:37.993 response: 00:09:37.993 { 00:09:37.993 "code": -32602, 00:09:37.993 "message": "Invalid cntlid range [65520-65519]" 00:09:37.993 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:37.993 00:46:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5226 -I 0 00:09:38.251 [2024-05-15 00:46:25.228360] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5226: invalid cntlid range [1-0] 00:09:38.251 00:46:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:38.251 { 00:09:38.251 "nqn": "nqn.2016-06.io.spdk:cnode5226", 00:09:38.251 "max_cntlid": 0, 00:09:38.251 "method": "nvmf_create_subsystem", 00:09:38.251 "req_id": 1 00:09:38.251 } 00:09:38.251 Got JSON-RPC error response 00:09:38.251 response: 00:09:38.251 { 00:09:38.251 "code": -32602, 00:09:38.251 "message": "Invalid cntlid range [1-0]" 00:09:38.251 }' 00:09:38.251 00:46:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:38.251 { 00:09:38.251 "nqn": "nqn.2016-06.io.spdk:cnode5226", 00:09:38.251 "max_cntlid": 0, 00:09:38.251 "method": "nvmf_create_subsystem", 00:09:38.251 "req_id": 1 00:09:38.251 } 00:09:38.251 Got JSON-RPC error response 00:09:38.251 response: 00:09:38.251 { 00:09:38.251 "code": -32602, 00:09:38.251 "message": "Invalid cntlid range [1-0]" 00:09:38.251 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:38.251 00:46:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2476 -I 65520 00:09:38.509 [2024-05-15 00:46:25.469164] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2476: invalid cntlid range [1-65520] 00:09:38.509 00:46:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:38.509 { 00:09:38.509 "nqn": "nqn.2016-06.io.spdk:cnode2476", 00:09:38.509 "max_cntlid": 65520, 00:09:38.509 "method": "nvmf_create_subsystem", 00:09:38.509 "req_id": 1 00:09:38.509 } 00:09:38.509 Got JSON-RPC error 
response 00:09:38.509 response: 00:09:38.509 { 00:09:38.509 "code": -32602, 00:09:38.509 "message": "Invalid cntlid range [1-65520]" 00:09:38.509 }' 00:09:38.509 00:46:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:38.509 { 00:09:38.509 "nqn": "nqn.2016-06.io.spdk:cnode2476", 00:09:38.509 "max_cntlid": 65520, 00:09:38.509 "method": "nvmf_create_subsystem", 00:09:38.509 "req_id": 1 00:09:38.509 } 00:09:38.509 Got JSON-RPC error response 00:09:38.509 response: 00:09:38.509 { 00:09:38.509 "code": -32602, 00:09:38.509 "message": "Invalid cntlid range [1-65520]" 00:09:38.509 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:38.509 00:46:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3207 -i 6 -I 5 00:09:38.767 [2024-05-15 00:46:25.713995] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3207: invalid cntlid range [6-5] 00:09:38.767 00:46:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:38.767 { 00:09:38.767 "nqn": "nqn.2016-06.io.spdk:cnode3207", 00:09:38.767 "min_cntlid": 6, 00:09:38.767 "max_cntlid": 5, 00:09:38.767 "method": "nvmf_create_subsystem", 00:09:38.767 "req_id": 1 00:09:38.767 } 00:09:38.767 Got JSON-RPC error response 00:09:38.767 response: 00:09:38.767 { 00:09:38.767 "code": -32602, 00:09:38.767 "message": "Invalid cntlid range [6-5]" 00:09:38.767 }' 00:09:38.767 00:46:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:38.767 { 00:09:38.767 "nqn": "nqn.2016-06.io.spdk:cnode3207", 00:09:38.767 "min_cntlid": 6, 00:09:38.767 "max_cntlid": 5, 00:09:38.767 "method": "nvmf_create_subsystem", 00:09:38.767 "req_id": 1 00:09:38.767 } 00:09:38.767 Got JSON-RPC error response 00:09:38.767 response: 00:09:38.767 { 00:09:38.767 "code": -32602, 00:09:38.767 "message": "Invalid cntlid range [6-5]" 00:09:38.767 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:38.767 00:46:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:39.025 00:46:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:39.025 { 00:09:39.026 "name": "foobar", 00:09:39.026 "method": "nvmf_delete_target", 00:09:39.026 "req_id": 1 00:09:39.026 } 00:09:39.026 Got JSON-RPC error response 00:09:39.026 response: 00:09:39.026 { 00:09:39.026 "code": -32602, 00:09:39.026 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:39.026 }' 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:39.026 { 00:09:39.026 "name": "foobar", 00:09:39.026 "method": "nvmf_delete_target", 00:09:39.026 "req_id": 1 00:09:39.026 } 00:09:39.026 Got JSON-RPC error response 00:09:39.026 response: 00:09:39.026 { 00:09:39.026 "code": -32602, 00:09:39.026 "message": "The specified target doesn't exist, cannot delete it." 
00:09:39.026 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:39.026 rmmod nvme_tcp 00:09:39.026 rmmod nvme_fabrics 00:09:39.026 rmmod nvme_keyring 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3960447 ']' 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3960447 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 3960447 ']' 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 3960447 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3960447 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3960447' 00:09:39.026 killing process with pid 3960447 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 3960447 00:09:39.026 [2024-05-15 00:46:25.935131] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:39.026 00:46:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 3960447 00:09:39.286 00:46:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:39.286 00:46:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:39.286 00:46:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:39.286 00:46:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:39.286 00:46:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:39.286 00:46:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.286 00:46:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:39.286 00:46:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.193 00:46:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
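The teardown above unloads nvme-tcp/nvme-fabrics/nvme-keyring and then reaps the target via killprocess, which refuses to signal anything that is not the reactor it started. A rough sketch of that guard-then-kill idiom, with simplified checks (the real helper is in spdk/test/common/autotest_common.sh):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                        # still alive?
        # only kill the app we launched; its comm shows up as reactor_0
        if [[ $(ps --no-headers -o comm= "$pid") != sudo ]]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                                   # reap before the next test
        fi
    }
    killprocess "$nvmfpid"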
00:09:41.193 00:09:41.193 real 0m8.501s 00:09:41.193 user 0m21.000s 00:09:41.193 sys 0m2.184s 00:09:41.193 00:46:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:41.193 00:46:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:41.193 ************************************ 00:09:41.193 END TEST nvmf_invalid 00:09:41.193 ************************************ 00:09:41.193 00:46:28 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:41.193 00:46:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:41.193 00:46:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:41.193 00:46:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:41.452 ************************************ 00:09:41.452 START TEST nvmf_abort 00:09:41.452 ************************************ 00:09:41.452 00:46:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:41.452 * Looking for test storage... 00:09:41.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:41.452 00:46:28 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:41.452 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:41.452 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.452 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.452 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.452 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.452 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.452 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.452 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.452 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.452 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh
00:09:41.453 [... paths/export.sh@2-6 elided: prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH, exports it, and echoes the result; the repeated full-PATH dumps are dropped here ...]
00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0
00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0
00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
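build_nvmf_app_args above only appends the shared-memory id and the 0xFFFF trace-group mask, since no special mode is requested; the resulting array is what later launches nvmf_tgt inside the target namespace. A hedged sketch of that assembly (variable names taken from the trace, values simplified):

    # sketch of how the nvmf_tgt command line seen later in this log is built
    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP_SHM_ID=0
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shm id + trace-group mask
    # once the namespace exists, the whole command is wrapped with it:
    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}" -m 0xE &
    nvmfpid=$!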
00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:41.453 00:46:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.358 00:46:30 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:43.358 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:43.358 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:43.358 Found net devices under 0000:08:00.0: cvl_0_0 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.358 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:43.359 Found net devices under 0000:08:00.1: cvl_0_1 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:43.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:43.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:09:43.359 00:09:43.359 --- 10.0.0.2 ping statistics --- 00:09:43.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.359 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:43.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:09:43.359 00:09:43.359 --- 10.0.0.1 ping statistics --- 00:09:43.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.359 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3962438 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3962438 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 3962438 ']' 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:43.359 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.359 [2024-05-15 00:46:30.230067] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
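Condensed from the trace above: nvmf_tcp_init moves the target port (cvl_0_0) into a private network namespace, leaves the initiator port (cvl_0_1) in the root namespace, and wires up 10.0.0.0/24 between them so NVMe/TCP traffic actually crosses the link. The same topology by hand, using exactly the commands logged:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target side leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns

The two one-packet pings verify both directions before any NVMe work starts.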
00:09:43.359 [2024-05-15 00:46:30.230154] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.359 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.359 [2024-05-15 00:46:30.309663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:43.617 [2024-05-15 00:46:30.462503] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.617 [2024-05-15 00:46:30.462581] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.617 [2024-05-15 00:46:30.462611] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.617 [2024-05-15 00:46:30.462636] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.617 [2024-05-15 00:46:30.462660] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.617 [2024-05-15 00:46:30.462969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.617 [2024-05-15 00:46:30.463030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.617 [2024-05-15 00:46:30.463042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.617 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:43.617 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:09:43.617 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:43.617 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:43.617 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.617 00:46:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.617 00:46:30 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:43.617 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.617 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.617 [2024-05-15 00:46:30.623648] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.617 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.617 00:46:30 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:43.617 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.617 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.617 Malloc0 00:09:43.617 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.617 00:46:30 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:43.617 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.617 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.876 Delay0 00:09:43.876 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.876 00:46:30 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:43.876 00:46:30 
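abort.sh then builds the whole target side over RPC; rpc_cmd is, in this harness, effectively a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. Equivalent direct invocations for the calls above and just below (paths shortened; my reading of bdev_delay_create's -r/-t/-w/-n is average/p99 read and write latency in microseconds, i.e. roughly 1 s, which keeps plenty of I/O queued for the abort test to cancel):

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0       # 64 MiB backing bdev, 4096-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0     # continues just below
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420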
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.876 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.876 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.876 00:46:30 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:43.876 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.876 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.876 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.876 00:46:30 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:43.876 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.876 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.876 [2024-05-15 00:46:30.696155] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:43.876 [2024-05-15 00:46:30.696447] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.876 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.876 00:46:30 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:43.876 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.876 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.876 00:46:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.876 00:46:30 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:43.876 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.876 [2024-05-15 00:46:30.802437] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:46.405 Initializing NVMe Controllers 00:09:46.405 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:46.405 controller IO queue size 128 less than required 00:09:46.405 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:46.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:46.405 Initialization complete. Launching workers. 
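For reading the summary that follows: the abort example was started above with 128 outstanding reads per queue against the ~1 s delay bdev, so nearly every I/O is still queued when an abort chases it. The flag readings in the comments are mine (perf-style options), not restated from this log:

    # -r target transport ID; -c core mask (core 0 only); -t run time in seconds;
    # -l log level; -q per-queue depth
    ./build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

My reading of the counters printed next: "failed" I/Os are reads that completed with abort status, and "unsuccess" counts abort commands that completed without catching their target; that split, rather than a hang or crash, is the expected outcome.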
00:09:46.405 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28856 00:09:46.405 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28917, failed to submit 62 00:09:46.405 success 28860, unsuccess 57, failed 0 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.405 rmmod nvme_tcp 00:09:46.405 rmmod nvme_fabrics 00:09:46.405 rmmod nvme_keyring 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3962438 ']' 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3962438 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 3962438 ']' 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 3962438 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:09:46.405 00:46:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:46.406 00:46:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3962438 00:09:46.406 00:46:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:46.406 00:46:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:46.406 00:46:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3962438' 00:09:46.406 killing process with pid 3962438 00:09:46.406 00:46:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 3962438 00:09:46.406 [2024-05-15 00:46:32.984251] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:46.406 00:46:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 3962438 00:09:46.406 00:46:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:46.406 00:46:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:46.406 00:46:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:46.406 00:46:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:46.406 
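nvmftestfini, above and continuing below, unwinds in reverse order: host-side kernel modules first (the rmmod lines), then the target process, then the namespace plumbing. A sketch; the body of _remove_spdk_ns is not shown in this excerpt, so the netns delete line is my assumption about what it does:

    sync
    modprobe -v -r nvme-tcp        # drags nvme_fabrics/nvme_keyring out too, as logged
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"; wait "$nvmfpid" 2>/dev/null   # killprocess sketch; the real helper retries
    ip netns delete cvl_0_0_ns_spdk                # assumption: what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1                       # matches the flush logged just below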
00:46:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:46.406 00:46:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.406 00:46:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.406 00:46:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.310 00:46:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:48.310 00:09:48.310 real 0m7.000s 00:09:48.310 user 0m10.249s 00:09:48.310 sys 0m2.408s 00:09:48.310 00:46:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:48.310 00:46:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:48.310 ************************************ 00:09:48.310 END TEST nvmf_abort 00:09:48.310 ************************************ 00:09:48.310 00:46:35 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:48.310 00:46:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:48.310 00:46:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:48.310 00:46:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:48.310 ************************************ 00:09:48.310 START TEST nvmf_ns_hotplug_stress 00:09:48.310 ************************************ 00:09:48.310 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:48.568 * Looking for test storage... 00:09:48.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.568 
00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:48.568 
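NVME_CONNECT and NVME_HOST, set near the top of this common.sh pass, are only defined here; nothing in this excerpt runs them. For context, a typical initiator-side use of those variables would look like the following (hypothetical for this run; subsystem name and address borrowed from the target configured below):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"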
00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:48.568 00:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:49.944 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.944 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:49.944 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:49.944 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:49.944 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:49.944 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:49.945 00:46:36 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:49.945 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:49.945 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.945 
00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:49.945 Found net devices under 0000:08:00.0: cvl_0_0 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:49.945 Found net devices under 0000:08:00.1: cvl_0_1 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:49.945 
00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.945 00:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:50.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:09:50.203 00:09:50.203 --- 10.0.0.2 ping statistics --- 00:09:50.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.203 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:50.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:09:50.203 00:09:50.203 --- 10.0.0.1 ping statistics --- 00:09:50.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.203 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3964243 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3964243 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 3964243 ']' 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:50.203 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:50.203 [2024-05-15 00:46:37.166553] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:09:50.203 [2024-05-15 00:46:37.166650] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.203 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.203 [2024-05-15 00:46:37.232248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:50.461 [2024-05-15 00:46:37.349824] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
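As with the abort test, the target is launched inside the namespace and the harness blocks until its RPC socket answers. A sketch of that launch/wait pair; waitforlisten's real loop in autotest_common.sh is more elaborate, and using spdk_get_version as the liveness probe is my stand-in:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!    # -i shm id, -e tracepoint group mask, -m core mask (0xE = cores 1-3)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done

The three reactor lines below confirm the 0xE mask: one reactor each on cores 1, 2 and 3.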
00:09:50.461 [2024-05-15 00:46:37.349885] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.461 [2024-05-15 00:46:37.349901] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.462 [2024-05-15 00:46:37.349914] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.462 [2024-05-15 00:46:37.349925] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.462 [2024-05-15 00:46:37.350019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.462 [2024-05-15 00:46:37.350099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.462 [2024-05-15 00:46:37.350130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.462 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:50.462 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:09:50.462 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:50.462 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:50.462 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:50.462 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.462 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:50.462 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:50.720 [2024-05-15 00:46:37.753794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:50.977 00:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:51.235 00:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.493 [2024-05-15 00:46:38.341090] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:51.493 [2024-05-15 00:46:38.341358] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.493 00:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:51.751 00:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:52.008 Malloc0 00:09:52.008 00:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:52.266 Delay0 00:09:52.266 00:46:39 
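ns_hotplug_stress drives scripts/rpc.py directly rather than through rpc_cmd. Condensed from the trace above and just below, the bring-up on this run is (full jenkins paths shortened):

    rpc=./scripts/rpc.py
    null_size=1000
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0        # 32 MiB, 512-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # continues just below
    $rpc bdev_null_create NULL1 1000 512             # 1000 MiB null bdev, hence null_size=1000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

-m 10 caps the subsystem at ten namespaces, which the add/remove churn below stays inside.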
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.524 00:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:52.781 NULL1 00:09:52.781 00:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:53.039 00:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3964478 00:09:53.039 00:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:53.039 00:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:09:53.039 00:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.039 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.411 Read completed with error (sct=0, sc=11) 00:09:54.411 00:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.411 00:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:54.411 00:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:54.669 true 00:09:54.669 00:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:09:54.669 00:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.602 00:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.859 00:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:55.859 00:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:56.118 true 00:09:56.118 00:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:09:56.118 00:46:42 
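From here to the end of the excerpt the log settles into the steady-state stress pattern: spdk_nvme_perf (PERF_PID above; 30 s of 512-byte random reads at queue depth 128, with -Q 1000 evidently rate-limiting the error prints, hence the "Message suppressed 999 times" lines) runs against cnode1 while the harness hot-removes and re-adds namespace 1 and grows NULL1 by one MiB per pass. Reads completing with errors while nsid 1 is briefly absent are the point of the test, not a failure. Continuing the sketch above, the loop body distilled from the repeating trace (inferred from the log, not copied from ns_hotplug_stress.sh):

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do       # stop once the 30 s perf run exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"    # each 'true' below is this call succeeding
    done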
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.377 00:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.686 00:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:56.686 00:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:56.963 true 00:09:56.963 00:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:09:56.963 00:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.221 00:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.479 00:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:57.479 00:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:57.738 true 00:09:57.738 00:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:09:57.738 00:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.996 00:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.563 00:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:58.563 00:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:58.563 true 00:09:58.821 00:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:09:58.821 00:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.759 00:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.759 00:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:59.759 00:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:00.328 true 00:10:00.328 00:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:00.328 00:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:00.587 00:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.845 00:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:00.845 00:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:01.104 true 00:10:01.104 00:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:01.104 00:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.362 00:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.620 00:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:01.620 00:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:01.878 true 00:10:01.878 00:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:01.878 00:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.815 00:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.074 00:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:03.074 00:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:03.336 true 00:10:03.336 00:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:03.336 00:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.595 00:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.853 00:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:03.853 00:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:04.112 true 00:10:04.112 00:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:04.112 00:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.370 00:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.938 00:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:04.938 00:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:04.938 true 00:10:04.938 00:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:04.938 00:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.877 00:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.393 00:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:06.393 00:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:06.650 true 00:10:06.650 00:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:06.650 00:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.907 00:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.165 00:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:07.165 00:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:07.423 true 00:10:07.423 00:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:07.423 00:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.681 00:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.939 00:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:07.939 00:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:08.197 true 00:10:08.197 00:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:08.197 00:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.129 00:46:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.386 00:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:09.386 00:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:09.644 true 00:10:09.644 00:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:09.644 00:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.902 00:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.468 00:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:10.468 00:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:10.468 true 00:10:10.468 00:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:10.468 00:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.726 00:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.983 00:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:10.983 00:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:11.241 true 00:10:11.241 00:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:11.241 00:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.175 00:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.756 00:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:12.756 00:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:12.756 true 00:10:12.756 00:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:12.756 00:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.321 00:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.578 00:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:13.578 00:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:13.836 true 00:10:13.836 00:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:13.836 00:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.094 00:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.351 00:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:14.352 00:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:14.609 true 00:10:14.609 00:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:14.609 00:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.867 00:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.124 00:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:15.124 00:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:15.689 true 00:10:15.689 00:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:15.689 00:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.255 00:47:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.770 00:47:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:16.770 00:47:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:17.027 true 00:10:17.027 00:47:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:17.027 00:47:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.283 00:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.540 00:47:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:17.540 00:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:17.799 true 00:10:17.799 00:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:17.799 00:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.056 00:47:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.314 00:47:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:18.314 00:47:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:18.571 true 00:10:18.828 00:47:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:18.828 00:47:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.760 00:47:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.760 00:47:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:19.760 00:47:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:20.017 true 00:10:20.275 00:47:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:20.275 00:47:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.532 00:47:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.532 00:47:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:20.532 00:47:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:20.790 true 00:10:20.790 00:47:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:20.790 00:47:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.723 00:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.981 00:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:21.981 00:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:22.239 true 00:10:22.497 00:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:22.497 00:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.755 00:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.755 00:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:22.755 00:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:23.013 true 00:10:23.013 00:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:23.013 00:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:23.946 Initializing NVMe Controllers
00:10:23.946 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:23.946 Controller IO queue size 128, less than required.
00:10:23.946 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:23.946 Controller IO queue size 128, less than required.
00:10:23.946 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:23.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:23.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:23.946 Initialization complete. Launching workers.
00:10:23.946 ========================================================
00:10:23.946                                                                                             Latency(us)
00:10:23.946 Device Information                                                      :    IOPS    MiB/s   Average        min        max
00:10:23.946 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  568.40     0.28  92236.05    2761.84 1029763.99
00:10:23.946 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5733.47     2.80  22326.43    6091.81  533820.72
00:10:23.946 ========================================================
00:10:23.946 Total                                                                   : 6301.86     3.08  28631.96    2761.84 1029763.99
00:10:23.946
00:10:23.946 00:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.204 00:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:24.204 00:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:24.462 true 00:10:24.462 00:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3964478 00:10:24.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3964478) - No such process 00:10:24.462 00:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3964478 00:10:24.462 00:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.733 00:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:24.994 00:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:24.994 00:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:24.994 00:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:24.994 00:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:24.994 00:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:25.252 null0 00:10:25.252 00:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:25.252 00:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:25.252 00:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:25.509 null1 00:10:25.509 00:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:25.509 00:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:25.509 00:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:25.765 null2 00:10:25.765 00:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:25.765 00:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:10:25.765 00:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:26.022 null3 00:10:26.022 00:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:26.022 00:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:26.022 00:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:26.279 null4 00:10:26.279 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:26.279 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:26.279 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:26.537 null5 00:10:26.537 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:26.537 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:26.537 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:26.794 null6 00:10:26.794 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:26.794 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:26.794 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:27.053 null7 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
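
The single-namespace phase traced above (ns_hotplug_stress.sh markers @44-@50, which ended at the "No such process" check once the background I/O process 3964478 exited) boils down to the loop below. This is a minimal sketch reconstructed from the line markers in the trace, not the script verbatim; the rpc variable, the initial null_size, and the 2>/dev/null redirect are assumptions:

    # Hot-remove and re-add NSID 1 while I/O runs, growing NULL1 one unit per pass.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    PERF_PID=3964478   # pid of the background I/O workload in this run
    null_size=1000     # starting size; the exact initial value is an assumption
    while kill -0 "$PERF_PID" 2>/dev/null; do                      # marker @44: loop until the workload exits
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # marker @45
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # marker @46
        null_size=$((null_size + 1))                               # marker @49
        "$rpc" bdev_null_resize NULL1 "$null_size"                 # marker @50: prints 'true' on success
    done
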
00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
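
Each rpc.py invocation in this churn is a thin JSON-RPC 2.0 client talking to the SPDK target over a local Unix socket. As a rough illustration, the `nvmf_subsystem_add_ns -n 3 ... null2` call above corresponds to a request like the following (request shape per SPDK's JSON-RPC conventions; the /var/tmp/spdk.sock path and the `nc -U` transport are assumptions here, since rpc.py opens the socket itself -- verify against the matching SPDK version):

    # Send one raw nvmf_subsystem_add_ns request to the target's RPC socket.
    nc -U /var/tmp/spdk.sock <<'EOF'
    {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
     "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                "namespace": {"nsid": 3, "bdev_name": "null2"}}}
    EOF
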
00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
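
The interleaved entries here come from eight of these workers running concurrently, one per null bdev. A minimal sketch of the launch logic, reconstructed from the ns_hotplug_stress.sh markers @14-@18 and @58-@66 visible in the trace (not the script verbatim; $rpc abbreviates the full rpc.py path):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {                         # markers @14-@18
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do   # markers @59-@60: create null0..null7
        "$rpc" bdev_null_create "null$i" 100 4096    # 100 MB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do   # markers @62-@64: launch the workers
        add_remove "$((i + 1))" "null$i" &           # NSID i+1 churns against null<i>
        pids+=($!)
    done
    wait "${pids[@]}"                      # marker @66: the eight worker pids
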
00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3967758 3967759 3967761 3967763 3967765 3967767 3967769 3967771 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.053 00:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:27.312 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:27.312 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:27.312 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:27.312 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:27.312 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:27.312 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:27.312 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.312 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:27.571 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:27.829 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:27.829 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:27.829 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:27.829 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:27.829 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:27.829 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:27.829 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:27.829 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.087 00:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:28.087 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.087 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.087 00:47:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:28.087 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.087 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.087 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:28.344 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:28.344 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:28.344 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:28.344 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:28.344 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:28.344 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:28.344 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.344 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:28.603 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:28.859 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:28.859 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:28.859 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:28.860 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:28.860 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:28.860 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:28.860 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:28.860 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.117 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.117 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.117 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:29.117 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.117 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.117 00:47:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:29.117 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.117 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.117 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:29.117 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.117 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.117 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:29.117 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.117 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.117 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:29.117 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.117 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.117 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:29.117 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.117 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.117 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:29.117 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.117 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.117 
00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:29.374 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:29.374 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:29.374 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:29.374 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:29.374 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:29.374 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.374 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.632 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:29.890 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:29.890 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.890 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:29.890 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:29.890 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:29.890 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:29.890 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:29.890 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:29.890 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.890 00:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.148 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:30.406 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.406 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:30.406 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.406 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:30.406 
00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:30.406 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:30.406 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:30.406 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:30.406 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.663 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:30.920 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.920 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.920 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:30.920 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:30.920 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.920 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.920 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:30.920 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:30.920 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:30.920 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:30.920 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:31.178 00:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:31.178 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.178 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.178 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.178 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:31.178 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:31.178 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:10:31.178 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.178 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:31.178 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.178 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.178 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:31.178 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.178 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.178 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:31.178 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.178 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.178 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:31.435 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.435 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.435 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:31.435 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:31.435 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.435 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.435 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:31.435 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.435 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.435 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:31.435 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:31.435 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:31.435 
00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:31.435 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.694 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:31.952 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:31.952 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.952 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.952 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:31.952 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.952 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.952 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:31.952 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.952 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.952 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:31.952 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:31.952 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:31.952 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:31.952 00:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:32.210 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:32.210 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.210 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.210 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.210 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:32.210 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.210 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.210 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.210 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.210 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.210 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
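The interleaved @17/@18 lines above are the core of the hotplug stress pass: one loop keeps attaching namespaces 1-8 (nsid n backed by null bdev null(n-1)) to nqn.2016-06.io.spdk:cnode1 through rpc.py while another keeps detaching them, so attach and detach race against live host connections. A minimal sketch of that add/remove pattern, serialized for readability (the $SPDK_DIR path and the shuf-based ordering are assumptions for illustration, not the exact control flow of ns_hotplug_stress.sh):

    #!/usr/bin/env bash
    # Namespace hotplug stress sketch: repeatedly add and remove nsids 1..8
    # on one subsystem while a host keeps the controller connected.
    SPDK_DIR=${SPDK_DIR:-$HOME/spdk}   # assumed SPDK checkout; the CI job uses its own workspace
    rpc="$SPDK_DIR/scripts/rpc.py"
    nqn=nqn.2016-06.io.spdk:cnode1

    for ((i = 0; i < 10; i++)); do
        for n in $(shuf -i 1-8); do    # attach in random order; null$((n-1)) backs nsid n
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        for n in $(shuf -i 1-8); do    # detach in another random order
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    done

Each add surfaces on the connected host as a namespace-attach notification and each remove as a detach, which is exactly the churn this test wants the target to survive.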
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:32.468 rmmod nvme_tcp
00:10:32.468 rmmod nvme_fabrics
00:10:32.468 rmmod nvme_keyring
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3964243 ']'
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3964243
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 3964243 ']'
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 3964243
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3964243
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3964243'
00:10:32.468 killing process with pid 3964243
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 3964243
00:10:32.468 [2024-05-15 00:47:19.525073] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:10:32.468 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 3964243
00:10:32.727 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:32.727 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:32.727 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:32.727 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:32.727 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:32.727 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:32.727 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:32.727 00:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:35.264 00:47:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:35.264
00:10:35.264 real 0m46.482s
00:10:35.264 user 3m32.493s
00:10:35.264 sys 0m16.480s
00:10:35.264 00:47:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:35.264 00:47:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:35.264 ************************************
00:10:35.264 END TEST nvmf_ns_hotplug_stress
00:10:35.264 ************************************
00:10:35.264 00:47:21 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:35.264 00:47:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:10:35.264 00:47:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:35.264 00:47:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:35.264 ************************************
00:10:35.264 START TEST nvmf_connect_stress
00:10:35.264 ************************************
00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:35.264 * Looking for test storage...
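Before the connect-stress run gets going, the teardown idiom traced above is worth spelling out: clear the EXIT trap, unload the initiator kernel modules under set +e with up to twenty modprobe -r attempts (nvme-tcp can stay pinned briefly while connections drain), then signal the target by PID only after kill -0 and a ps comm= lookup confirm the PID is still the expected reactor process. A condensed sketch of that shutdown sequence, with the PID hardcoded and the retry sleep assumed, not copied from common.sh:

    #!/usr/bin/env bash
    # Teardown sketch: unload NVMe initiator modules, then stop the target.
    pid=3964243                                  # PID recorded when nvmf_tgt was started

    set +e                                       # removal may fail while references drain
    for _ in {1..20}; do
        modprobe -v -r nvme-tcp && break         # -v prints the rmmod steps (nvme_tcp, nvme_fabrics, ...)
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e

    if kill -0 "$pid" 2>/dev/null; then          # is the process still alive?
        name=$(ps --no-headers -o comm= "$pid")  # check what we are about to signal
        [ "$name" != sudo ] || exit 1            # refuse to kill a sudo wrapper by mistake
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true          # reaping works because this shell started the target
    fi

The comm= sanity check matters in CI: the PID may have been recycled by an unrelated process between the test start and the cleanup, and killing blindly could take down the wrong thing.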
00:10:35.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:35.264 00:47:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:10:36.639 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:36.639 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:36.639 Found net devices under 0000:08:00.0: cvl_0_0 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:36.639 00:47:23 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:10:36.639 Found net devices under 0000:08:00.1: cvl_0_1 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:36.639 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:36.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:36.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:10:36.640 00:10:36.640 --- 10.0.0.2 ping statistics --- 00:10:36.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.640 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:36.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:10:36.640 00:10:36.640 --- 10.0.0.1 ping statistics --- 00:10:36.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.640 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3969923 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3969923 00:10:36.640 00:47:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:36.897 00:47:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 3969923 ']' 00:10:36.897 00:47:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.897 00:47:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:36.897 00:47:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.897 00:47:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:36.897 00:47:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.897 [2024-05-15 00:47:23.744908] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
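The target now starting up listens inside the cvl_0_0_ns_spdk network namespace prepared a few lines earlier: one port of the NIC pair (cvl_0_0, 10.0.0.2) was moved into the namespace to act as the target side, its sibling (cvl_0_1, 10.0.0.1) stayed in the root namespace as the initiator, TCP port 4420 was opened in the firewall, and a ping in each direction verified the path. The same two-namespace loopback topology can be rebuilt roughly like this (run as root; the cvl_* interface names are specific to this machine):

    #!/usr/bin/env bash
    # Two-port NVMe/TCP loopback: target NIC isolated in its own netns,
    # initiator NIC left in the root netns.
    ns=cvl_0_0_ns_spdk
    tgt_if=cvl_0_0        # becomes 10.0.0.2 inside the namespace
    ini_if=cvl_0_1        # stays outside as 10.0.0.1

    ip -4 addr flush "$tgt_if"
    ip -4 addr flush "$ini_if"
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$ini_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # root ns -> namespace
    ip netns exec "$ns" ping -c 1 10.0.0.1   # namespace -> root ns
    # The target is then launched inside the namespace, e.g.:
    #   ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

Isolating the target in its own namespace lets a single host exercise a real NIC-to-NIC TCP path instead of kernel loopback, which is why 10.0.0.1/10.0.0.2 appear as the fabric addresses throughout the rest of the test.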
00:10:36.897 [2024-05-15 00:47:23.745019] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.897 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.897 [2024-05-15 00:47:23.812153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:36.897 [2024-05-15 00:47:23.931519] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.897 [2024-05-15 00:47:23.931582] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.897 [2024-05-15 00:47:23.931598] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.897 [2024-05-15 00:47:23.931612] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.897 [2024-05-15 00:47:23.931624] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.897 [2024-05-15 00:47:23.933970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.897 [2024-05-15 00:47:23.934059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.897 [2024-05-15 00:47:23.934092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.155 [2024-05-15 00:47:24.071229] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.155 [2024-05-15 00:47:24.088731] nvmf_rpc.c: 610:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:37.155 [2024-05-15 00:47:24.106089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.155 NULL1 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3969947 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 
00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.155 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.721 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.721 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:37.721 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:37.721 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.721 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.979 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.979 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:37.979 00:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:37.979 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.979 00:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:38.236 00:47:25 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.236 00:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:38.237 00:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:38.237 00:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.237 00:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:38.494 00:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.494 00:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:38.494 00:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:38.494 00:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.494 00:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:38.752 00:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.752 00:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:38.752 00:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:38.752 00:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.752 00:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.319 00:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.319 00:47:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:39.319 00:47:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:39.319 00:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.319 00:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.577 00:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.577 00:47:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:39.577 00:47:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:39.577 00:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.577 00:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.835 00:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.835 00:47:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:39.835 00:47:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:39.835 00:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.835 00:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:40.092 00:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.092 00:47:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:40.092 00:47:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:40.092 00:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.092 00:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:40.350 00:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:10:40.350 00:47:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:40.350 00:47:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:40.350 00:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.350 00:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:40.915 00:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.915 00:47:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:40.915 00:47:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:40.915 00:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.915 00:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:41.173 00:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.173 00:47:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:41.173 00:47:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:41.173 00:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.173 00:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:41.430 00:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.430 00:47:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:41.430 00:47:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:41.430 00:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.430 00:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:41.688 00:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.688 00:47:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:41.688 00:47:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:41.688 00:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.688 00:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:41.945 00:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.945 00:47:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:41.945 00:47:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:41.945 00:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.945 00:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:42.510 00:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.510 00:47:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:42.510 00:47:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:42.510 00:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.510 00:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:42.767 00:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.767 00:47:29 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:42.767 00:47:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:42.767 00:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.767 00:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:43.026 00:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.026 00:47:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:43.026 00:47:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:43.026 00:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.026 00:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:43.283 00:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.283 00:47:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:43.283 00:47:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:43.283 00:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.283 00:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:43.541 00:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.541 00:47:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:43.541 00:47:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:43.541 00:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.541 00:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.106 00:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.106 00:47:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:44.106 00:47:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.106 00:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.106 00:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.363 00:47:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.363 00:47:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:44.363 00:47:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.363 00:47:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.363 00:47:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.621 00:47:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.621 00:47:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:44.621 00:47:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.621 00:47:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.621 00:47:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.879 00:47:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.879 00:47:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 3969947 00:10:44.879 00:47:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.879 00:47:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.879 00:47:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.445 00:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.445 00:47:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:45.445 00:47:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.445 00:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.445 00:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.703 00:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.703 00:47:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:45.703 00:47:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.703 00:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.703 00:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.960 00:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.960 00:47:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:45.960 00:47:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.960 00:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.960 00:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.218 00:47:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.218 00:47:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:46.218 00:47:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.218 00:47:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.218 00:47:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.478 00:47:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.478 00:47:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:46.478 00:47:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.478 00:47:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.478 00:47:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.043 00:47:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.043 00:47:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:47.043 00:47:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.043 00:47:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.043 00:47:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.301 00:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.301 00:47:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947 00:10:47.301 00:47:34 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:10:47.301 00:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:47.301 00:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:47.301 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3969947
00:10:47.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3969947) - No such process
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3969947
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:47.559 rmmod nvme_tcp
00:10:47.559 rmmod nvme_fabrics
00:10:47.559 rmmod nvme_keyring
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3969923 ']'
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3969923
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 3969923 ']'
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 3969923
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3969923
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3969923'
00:10:47.559 killing process with pid 3969923
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 3969923
00:10:47.559 [2024-05-15 00:47:34.526777] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:10:47.559 00:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 3969923
00:10:47.818 00:47:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:47.818 00:47:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:47.818 00:47:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:47.818 00:47:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:47.818 00:47:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:47.818 00:47:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:47.818 00:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:47.818 00:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:50.356 00:47:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:50.356
00:10:50.356 real 0m14.925s
00:10:50.356 user 0m38.260s
00:10:50.356 sys 0m5.416s
00:10:50.356 00:47:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:50.356 00:47:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:50.356 ************************************
00:10:50.356 END TEST nvmf_connect_stress
00:10:50.356 ************************************
00:10:50.356 00:47:36 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:10:50.356 00:47:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:10:50.356 00:47:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:50.356 00:47:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:50.356 ************************************
00:10:50.356 START TEST nvmf_fused_ordering
00:10:50.356 ************************************
00:10:50.356 00:47:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:10:50.356 * Looking for test storage...
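The pages of kill -0 / rpc_cmd records above are connect_stress.sh's watch loop, and the records just before the END TEST banner are its teardown, both traced with their script line numbers. A minimal sketch of what those line numbers correspond to, reconstructed from this log alone (the variable names are stand-ins, since the log only shows expanded values, and the real script in the SPDK tree may differ in detail):

    # connect_stress.sh lines 34-43, as traced above:
    while kill -0 "$perf_pid" 2> /dev/null; do  # line 34: is the stress process still alive?
        rpc_cmd < "$testdir/rpc.txt"            # line 35: replay a batch of RPCs at the target
    done                                        # loop ends once kill -0 reports "No such process"
    wait "$perf_pid"                            # line 38: reap the finished stress process
    rm -f "$testdir/rpc.txt"                    # line 39: drop the RPC batch file
    trap - SIGINT SIGTERM EXIT                  # line 41: clear the failure trap
    nvmftestfini                                # line 43: modprobe -r nvme-tcp/nvme-fabrics, kill nvmf_tgt, tear down the netns

The run_test record at nvmf/nvmf.sh@34 above is the harness wrapper that frames every suite the same way: it times the child script (the real/user/sys triple) and prints the starred START/END banners around it.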
00:10:50.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.356 00:47:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.356 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:50.356 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.356 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.356 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.356 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.356 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.356 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.356 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.356 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.356 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.356 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.356 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:50.356 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:50.357 00:47:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:10:51.734 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:51.734 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.734 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:51.735 Found net devices under 0000:08:00.0: cvl_0_0 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.735 00:47:38 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:10:51.735 Found net devices under 0000:08:00.1: cvl_0_1 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:51.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:51.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:10:51.735 00:10:51.735 --- 10.0.0.2 ping statistics --- 00:10:51.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.735 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:51.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:10:51.735 00:10:51.735 --- 10.0.0.1 ping statistics --- 00:10:51.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.735 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3972371 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3972371 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 3972371 ']' 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:51.735 00:47:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:51.735 [2024-05-15 00:47:38.768619] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
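Up to this point fused_ordering.sh has only been building its rig: nvmf/common.sh found the two ice-driven E810 ports (cvl_0_0 and cvl_0_1), moved one into a private network namespace, and proved connectivity in both directions with the two pings above. Stripped of the harness indirection, that setup condenses to the following commands, taken from the nvmf_tcp_init records above (only the nvmf_tgt path is shortened here):

    ip netns add cvl_0_0_ns_spdk                       # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                 # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back
    modprobe nvme-tcp                                  # kernel initiator driver
    # nvmfappstart then launches the target inside that namespace:
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

The DPDK EAL records that follow are that nvmf_tgt instance coming up, pinned to core 1 by the -m 0x2 mask.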
00:10:51.735 [2024-05-15 00:47:38.768719] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.993 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.993 [2024-05-15 00:47:38.836687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.993 [2024-05-15 00:47:38.954894] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.994 [2024-05-15 00:47:38.954964] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.994 [2024-05-15 00:47:38.954982] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.994 [2024-05-15 00:47:38.954995] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.994 [2024-05-15 00:47:38.955007] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.994 [2024-05-15 00:47:38.955043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 [2024-05-15 00:47:39.093294] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 [2024-05-15 00:47:39.109226] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:52.252 [2024-05-15 00:47:39.109487] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 NULL1 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.252 00:47:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:52.252 [2024-05-15 00:47:39.155245] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
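At this point the target side is fully assembled: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, and a null bdev exposed as its namespace. Replayed against scripts/rpc.py with the flags copied verbatim from the rpc_cmd records above (an assumption worth flagging: rpc_cmd is the harness's wrapper, and the log does not show how it forwards to rpc.py):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512   # the 1 GB, 512 B-block namespace reported below
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # then the initiator-side test binary is pointed at that listener:
    test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The deprecation warning logged by nvmf_subsystem_add_listener here is the same [listen_]address.transport notice that was counted during connect_stress's shutdown above.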
00:10:52.252 [2024-05-15 00:47:39.155292] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3972481 ] 00:10:52.252 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.840 Attached to nqn.2016-06.io.spdk:cnode1 00:10:52.840 Namespace ID: 1 size: 1GB 00:10:52.840 fused_ordering(0) 00:10:52.840 fused_ordering(1) 00:10:52.840 fused_ordering(2) 00:10:52.840 fused_ordering(3) 00:10:52.840 fused_ordering(4) 00:10:52.840 fused_ordering(5) 00:10:52.840 fused_ordering(6) 00:10:52.840 fused_ordering(7) 00:10:52.840 fused_ordering(8) 00:10:52.840 fused_ordering(9) 00:10:52.840 fused_ordering(10) 00:10:52.840 fused_ordering(11) 00:10:52.840 fused_ordering(12) 00:10:52.840 fused_ordering(13) 00:10:52.840 fused_ordering(14) 00:10:52.840 fused_ordering(15) 00:10:52.840 fused_ordering(16) 00:10:52.840 fused_ordering(17) 00:10:52.840 fused_ordering(18) 00:10:52.840 fused_ordering(19) 00:10:52.840 fused_ordering(20) 00:10:52.840 fused_ordering(21) 00:10:52.840 fused_ordering(22) 00:10:52.840 fused_ordering(23) 00:10:52.840 fused_ordering(24) 00:10:52.840 fused_ordering(25) 00:10:52.840 fused_ordering(26) 00:10:52.840 fused_ordering(27) 00:10:52.840 fused_ordering(28) 00:10:52.840 fused_ordering(29) 00:10:52.840 fused_ordering(30) 00:10:52.840 fused_ordering(31) 00:10:52.840 fused_ordering(32) 00:10:52.840 fused_ordering(33) 00:10:52.840 fused_ordering(34) 00:10:52.840 fused_ordering(35) 00:10:52.840 fused_ordering(36) 00:10:52.840 fused_ordering(37) 00:10:52.840 fused_ordering(38) 00:10:52.840 fused_ordering(39) 00:10:52.840 fused_ordering(40) 00:10:52.840 fused_ordering(41) 00:10:52.840 fused_ordering(42) 00:10:52.840 fused_ordering(43) 00:10:52.840 fused_ordering(44) 00:10:52.840 fused_ordering(45) 00:10:52.840 fused_ordering(46) 00:10:52.840 fused_ordering(47) 00:10:52.840 fused_ordering(48) 00:10:52.840 fused_ordering(49) 00:10:52.840 fused_ordering(50) 00:10:52.840 fused_ordering(51) 00:10:52.840 fused_ordering(52) 00:10:52.840 fused_ordering(53) 00:10:52.840 fused_ordering(54) 00:10:52.840 fused_ordering(55) 00:10:52.840 fused_ordering(56) 00:10:52.840 fused_ordering(57) 00:10:52.840 fused_ordering(58) 00:10:52.840 fused_ordering(59) 00:10:52.840 fused_ordering(60) 00:10:52.840 fused_ordering(61) 00:10:52.840 fused_ordering(62) 00:10:52.840 fused_ordering(63) 00:10:52.840 fused_ordering(64) 00:10:52.840 fused_ordering(65) 00:10:52.840 fused_ordering(66) 00:10:52.840 fused_ordering(67) 00:10:52.840 fused_ordering(68) 00:10:52.840 fused_ordering(69) 00:10:52.840 fused_ordering(70) 00:10:52.840 fused_ordering(71) 00:10:52.840 fused_ordering(72) 00:10:52.840 fused_ordering(73) 00:10:52.840 fused_ordering(74) 00:10:52.840 fused_ordering(75) 00:10:52.840 fused_ordering(76) 00:10:52.840 fused_ordering(77) 00:10:52.840 fused_ordering(78) 00:10:52.840 fused_ordering(79) 00:10:52.840 fused_ordering(80) 00:10:52.840 fused_ordering(81) 00:10:52.840 fused_ordering(82) 00:10:52.840 fused_ordering(83) 00:10:52.840 fused_ordering(84) 00:10:52.840 fused_ordering(85) 00:10:52.840 fused_ordering(86) 00:10:52.840 fused_ordering(87) 00:10:52.840 fused_ordering(88) 00:10:52.840 fused_ordering(89) 00:10:52.840 fused_ordering(90) 00:10:52.840 fused_ordering(91) 00:10:52.840 fused_ordering(92) 00:10:52.840 fused_ordering(93) 00:10:52.840 fused_ordering(94) 00:10:52.840 fused_ordering(95) 00:10:52.840 fused_ordering(96) 00:10:52.840 
fused_ordering(97) 00:10:52.840 fused_ordering(98) 00:10:52.840 fused_ordering(99) 00:10:52.840 fused_ordering(100) 00:10:52.840 fused_ordering(101) 00:10:52.840 fused_ordering(102) 00:10:52.840 fused_ordering(103) 00:10:52.840 fused_ordering(104) 00:10:52.840 fused_ordering(105) 00:10:52.840 fused_ordering(106) 00:10:52.840 fused_ordering(107) 00:10:52.840 fused_ordering(108) 00:10:52.840 fused_ordering(109) 00:10:52.840 fused_ordering(110) 00:10:52.840 fused_ordering(111) 00:10:52.840 fused_ordering(112) 00:10:52.840 fused_ordering(113) 00:10:52.840 fused_ordering(114) 00:10:52.840 fused_ordering(115) 00:10:52.840 fused_ordering(116) 00:10:52.840 fused_ordering(117) 00:10:52.840 fused_ordering(118) 00:10:52.840 fused_ordering(119) 00:10:52.840 fused_ordering(120) 00:10:52.840 fused_ordering(121) 00:10:52.840 fused_ordering(122) 00:10:52.840 fused_ordering(123) 00:10:52.840 fused_ordering(124) 00:10:52.840 fused_ordering(125) 00:10:52.840 fused_ordering(126) 00:10:52.840 fused_ordering(127) 00:10:52.840 fused_ordering(128) 00:10:52.840 fused_ordering(129) 00:10:52.840 fused_ordering(130) 00:10:52.840 fused_ordering(131) 00:10:52.840 fused_ordering(132) 00:10:52.840 fused_ordering(133) 00:10:52.840 fused_ordering(134) 00:10:52.840 fused_ordering(135) 00:10:52.840 fused_ordering(136) 00:10:52.840 fused_ordering(137) 00:10:52.840 fused_ordering(138) 00:10:52.840 fused_ordering(139) 00:10:52.840 fused_ordering(140) 00:10:52.840 fused_ordering(141) 00:10:52.840 fused_ordering(142) 00:10:52.840 fused_ordering(143) 00:10:52.840 fused_ordering(144) 00:10:52.840 fused_ordering(145) 00:10:52.840 fused_ordering(146) 00:10:52.840 fused_ordering(147) 00:10:52.840 fused_ordering(148) 00:10:52.840 fused_ordering(149) 00:10:52.840 fused_ordering(150) 00:10:52.840 fused_ordering(151) 00:10:52.840 fused_ordering(152) 00:10:52.840 fused_ordering(153) 00:10:52.840 fused_ordering(154) 00:10:52.840 fused_ordering(155) 00:10:52.840 fused_ordering(156) 00:10:52.840 fused_ordering(157) 00:10:52.840 fused_ordering(158) 00:10:52.840 fused_ordering(159) 00:10:52.840 fused_ordering(160) 00:10:52.840 fused_ordering(161) 00:10:52.840 fused_ordering(162) 00:10:52.840 fused_ordering(163) 00:10:52.840 fused_ordering(164) 00:10:52.840 fused_ordering(165) 00:10:52.840 fused_ordering(166) 00:10:52.840 fused_ordering(167) 00:10:52.840 fused_ordering(168) 00:10:52.840 fused_ordering(169) 00:10:52.840 fused_ordering(170) 00:10:52.840 fused_ordering(171) 00:10:52.840 fused_ordering(172) 00:10:52.840 fused_ordering(173) 00:10:52.840 fused_ordering(174) 00:10:52.840 fused_ordering(175) 00:10:52.840 fused_ordering(176) 00:10:52.840 fused_ordering(177) 00:10:52.840 fused_ordering(178) 00:10:52.840 fused_ordering(179) 00:10:52.840 fused_ordering(180) 00:10:52.840 fused_ordering(181) 00:10:52.840 fused_ordering(182) 00:10:52.840 fused_ordering(183) 00:10:52.840 fused_ordering(184) 00:10:52.840 fused_ordering(185) 00:10:52.840 fused_ordering(186) 00:10:52.840 fused_ordering(187) 00:10:52.840 fused_ordering(188) 00:10:52.840 fused_ordering(189) 00:10:52.840 fused_ordering(190) 00:10:52.840 fused_ordering(191) 00:10:52.840 fused_ordering(192) 00:10:52.840 fused_ordering(193) 00:10:52.840 fused_ordering(194) 00:10:52.840 fused_ordering(195) 00:10:52.840 fused_ordering(196) 00:10:52.840 fused_ordering(197) 00:10:52.840 fused_ordering(198) 00:10:52.840 fused_ordering(199) 00:10:52.840 fused_ordering(200) 00:10:52.840 fused_ordering(201) 00:10:52.840 fused_ordering(202) 00:10:52.840 fused_ordering(203) 00:10:52.840 fused_ordering(204) 
00:10:52.840 fused_ordering(205) 00:10:53.428 fused_ordering(206) [fused_ordering(207) through fused_ordering(955) logged identically, one line per iteration, wall clock 00:10:53.428 through 00:10:55.857 -- repeated lines condensed] 00:10:55.857 fused_ordering(956)
fused_ordering(957) [fused_ordering(958) through fused_ordering(1022) logged identically at 00:10:55.857 -- repeated lines condensed] 00:10:55.857 fused_ordering(1023) 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:55.857 rmmod nvme_tcp 00:10:55.857 rmmod nvme_fabrics 00:10:55.857 rmmod nvme_keyring 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3972371 ']' 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3972371
00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 3972371 ']' 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 3972371 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3972371 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3972371' 00:10:55.857 killing process with pid 3972371 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 3972371 00:10:55.857 [2024-05-15 00:47:42.740194] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:55.857 00:47:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 3972371 00:10:56.117 00:47:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:56.117 00:47:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:56.117 00:47:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:56.117 00:47:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.117 00:47:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:56.117 00:47:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.117 00:47:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.117 00:47:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.026 00:47:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:58.026 00:10:58.026 real 0m8.146s 00:10:58.026 user 0m6.040s 00:10:58.026 sys 0m3.827s 00:10:58.026 00:47:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:58.026 00:47:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:58.026 ************************************ 00:10:58.026 END TEST nvmf_fused_ordering 00:10:58.026 ************************************ 00:10:58.026 00:47:45 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:58.026 00:47:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:58.026 00:47:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:58.026 00:47:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:58.026 ************************************ 00:10:58.026 START TEST nvmf_delete_subsystem 00:10:58.026 ************************************ 00:10:58.026 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:58.285 
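The teardown traced just above, before this next test starts, is worth a note: killprocess 3972371 confirms the nvmf_tgt PID is still alive with kill -0, resolves the process name with ps --no-headers -o comm= (reactor_1, the SPDK reactor thread), sends a plain SIGTERM, and waits for the process to be reaped so the following test begins from a clean slate. A minimal sketch of that shutdown sequence, assuming a bash shell and a PID that is a child of the current shell; this is an illustration, not SPDK's exact helper:

# killprocess-style teardown (illustrative sketch, not SPDK's exact code)
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0      # nothing to do if the PID is already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")     # resolve the command name, as the trace does
    echo "killing process with pid $pid ($name)"
    kill "$pid"                                 # SIGTERM first, so the target can shut down cleanly
    wait "$pid" 2>/dev/null                     # reap it; wait only works for the shell's own children
}

The trace also special-cases processes running under sudo, which is elided in this sketch.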
* Looking for test storage... 00:10:58.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.285 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.286 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.286 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.286 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[the same three toolchain directories repeated several more times -- condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same repeated toolchain prefix, condensed]:/var/lib/snapd/snap/bin 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same repeated toolchain prefix, condensed]:/var/lib/snapd/snap/bin 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:45 nvmf_tcp.nvmf_delete_subsystem --
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.286 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.286 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:58.286 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:58.286 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:58.286 00:47:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:00.190 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:00.190 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:00.190 Found net devices under 0000:08:00.0: cvl_0_0 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:00.190 Found net devices under 0000:08:00.1: cvl_0_1 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:00.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:11:00.190 00:11:00.190 --- 10.0.0.2 ping statistics --- 00:11:00.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.190 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:11:00.190 00:11:00.190 --- 10.0.0.1 ping statistics --- 00:11:00.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.190 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:00.190 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:00.191 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.191 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3974300 00:11:00.191 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:00.191 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3974300 00:11:00.191 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 3974300 ']' 00:11:00.191 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.191 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:00.191 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
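With both pings answering, the test's data path is in place: nvmf_tcp_init has moved one port of the E810 pair (cvl_0_0, 10.0.0.2) into a private network namespace for the target while the other port (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator, and an iptables rule opens TCP port 4420 for NVMe/TCP. Condensed from the commands in the trace above, with device names and addresses exactly as in this run:

# target port lives in its own netns; initiator port stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                 # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator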
00:11:00.191 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:00.191 00:47:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.191 [2024-05-15 00:47:46.984579] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:11:00.191 [2024-05-15 00:47:46.984683] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.191 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.191 [2024-05-15 00:47:47.051220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:00.191 [2024-05-15 00:47:47.170397] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.191 [2024-05-15 00:47:47.170461] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.191 [2024-05-15 00:47:47.170476] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.191 [2024-05-15 00:47:47.170489] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.191 [2024-05-15 00:47:47.170501] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.191 [2024-05-15 00:47:47.172956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.191 [2024-05-15 00:47:47.172994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.448 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:00.448 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:11:00.448 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:00.448 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.448 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.448 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.448 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:00.448 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.449 [2024-05-15 00:47:47.312988] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.449 00:47:47 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.449 [2024-05-15 00:47:47.328949] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:00.449 [2024-05-15 00:47:47.329231] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.449 NULL1 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.449 Delay0 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3974324 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:00.449 00:47:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:00.449 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.449 [2024-05-15 00:47:47.414021] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
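At this point the fixture is complete: a 1000 MB null bdev (NULL1, 512-byte blocks) wrapped in a delay bdev (Delay0, 1,000,000 us of added latency per operation, taking bdev_delay_create's arguments as microseconds) is attached as a namespace of cnode1, and spdk_nvme_perf is started against it at queue depth 128 for 5 seconds. The harness sleeps 2 seconds and then deletes the subsystem while that I/O is still in flight. rpc_cmd in the trace wraps SPDK's scripts/rpc.py, so the same fixture can be rebuilt directly; a sketch with paths as in this run (backgrounding perf with & is an addition for the sketch):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512          # 1000 MB null backing bdev, 512 B blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # delete while I/O is outstanding

The delay bdev is what guarantees commands are still queued when the delete lands; the storm of 'completed with error (sct=0, sc=8)' completions below is those commands being aborted (generic status 0x08 is command aborted due to SQ deletion in the NVMe base spec), which is the behavior the test exercises rather than a failure.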
00:11:02.346 00:47:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.604 [repeated 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions at 00:11:02.604, interleaved with 'starting I/O failed: -6' markers -- identical lines condensed] [2024-05-15 00:47:49.585443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aed20 is same with the state(5) to be set 00:11:02.604 [further condensed Read/Write error completions and 'starting I/O failed: -6' markers] [2024-05-15 00:47:49.586753] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f88c8000c00 is same with the state(5) to be set 00:11:02.604 [condensed Read/Write error completions] [2024-05-15 00:47:49.587264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b6780 is same with the state(5) to be set 00:11:02.605 [condensed Read/Write error completions] 00:11:03.573 [2024-05-15 00:47:50.552335] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ae060 is same with the state(5) to be set 00:11:03.574 [condensed Read/Write error completions] [2024-05-15 00:47:50.587700] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f88c800c600 is same with the state(5) to be set 00:11:03.574 [condensed Read/Write error completions] 00:11:03.574 Read completed with error (sct=0, sc=8)
00:11:03.574 [2024-05-15 00:47:50.588972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b65a0 is same with the state(5) to be set 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Write completed with error (sct=0, sc=8) 00:11:03.574 Write completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Write completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 [2024-05-15 00:47:50.590994] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b6960 is same with the state(5) to be set 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Write completed with error (sct=0, sc=8) 00:11:03.574 Write completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Write completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 Read completed with error (sct=0, sc=8) 00:11:03.574 [2024-05-15 00:47:50.591853] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f88c800bfe0 is same with the state(5) to be set 00:11:03.574 Initializing NVMe Controllers 00:11:03.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:03.574 Controller IO queue size 128, less than required. 00:11:03.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:03.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:03.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:03.574 Initialization complete. Launching workers. 
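A note on the completion storm above: status (sct=0, sc=8) corresponds to NVMe generic status 8, "command aborted due to SQ deletion", which is exactly what this test provokes by deleting the subsystem while spdk_nvme_perf still has I/O outstanding. A minimal re-creation of that race, using only commands that appear elsewhere in this log (the 1-second sleep and the relative paths are assumptions):

    # Start a short randrw run against the target, then yank the subsystem mid-flight.
    ./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 &
    perf_pid=$!
    sleep 1    # assumed: give perf time to queue I/O
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # outstanding I/O completes with sc=8
    wait "$perf_pid" || echo "spdk_nvme_perf reported errors, as expected"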
00:11:03.574 ========================================================
00:11:03.574 Latency(us)
00:11:03.574 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:11:03.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     163.79       0.08  910448.33    1833.64 1046122.41
00:11:03.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     161.31       0.08  914129.96     605.85 1013180.45
00:11:03.574 ========================================================
00:11:03.574 Total                                                                    :     325.11       0.16  912275.09     605.85 1046122.41
00:11:03.574
00:11:03.574 [2024-05-15 00:47:50.592330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ae060 (9): Bad file descriptor
00:11:03.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:11:03.574 00:47:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:03.574 00:47:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:11:03.574 00:47:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3974324
00:11:03.574 00:47:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3974324
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3974324) - No such process
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3974324
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3974324
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3974324
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:04.139 [2024-05-15 00:47:51.114344] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3974702 00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3974702 00:11:04.139 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:04.139 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.139 [2024-05-15 00:47:51.185543] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
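Stripped of the xtrace prefixes, the polling that produces the repeated kill -0 3974702 / sleep 0.5 entries below is just a bounded wait loop; a minimal sketch of the same pattern (variable name hypothetical):

    # Poll until the perf process exits; give up after 20 half-second ticks (~10 s).
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && break    # same guard as the (( delay++ > 20 )) lines in the trace
        sleep 0.5
    done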
00:11:04.704 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:04.704 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3974702 00:11:04.704 00:47:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:05.269 00:47:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:05.269 00:47:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3974702 00:11:05.269 00:47:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:05.834 00:47:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:05.834 00:47:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3974702 00:11:05.834 00:47:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:06.091 00:47:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:06.091 00:47:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3974702 00:11:06.091 00:47:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:06.656 00:47:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:06.656 00:47:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3974702 00:11:06.656 00:47:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:07.221 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:07.221 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3974702 00:11:07.221 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:07.479 Initializing NVMe Controllers 00:11:07.479 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:07.479 Controller IO queue size 128, less than required. 00:11:07.479 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:07.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:07.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:07.479 Initialization complete. Launching workers. 
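The ~1,005,000 us averages in the table that follows are dominated by the Delay0 bdev serving the namespace (attached at delete_subsystem.sh@50 earlier). Its creation is outside this excerpt; if it follows SPDK's delay-bdev pattern, it would look roughly like this, with the latency values assumed from the measured ~1 s average:

    # Hypothetical reconstruction: wrap a malloc bdev in a ~1 s delay bdev.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read+write latency, microseconds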
00:11:07.479 ========================================================
00:11:07.479 Latency(us)
00:11:07.479 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:11:07.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1005248.28 1000240.61 1042403.77
00:11:07.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1005000.12 1000272.25 1042280.58
00:11:07.479 ========================================================
00:11:07.479 Total                                                                    :     256.00       0.12 1005124.20 1000240.61 1042403.77
00:11:07.737 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:07.737 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3974702
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3974702) - No such process
00:11:07.737 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3974702
00:11:07.737 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:11:07.737 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:11:07.737 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:07.737 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:11:07.737 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:07.737 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:11:07.737 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:07.737 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:11:07.737 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:07.737 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:11:07.737 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:11:07.737 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3974300 ']'
00:11:07.737 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3974300
00:11:07.737 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 3974300 ']'
00:11:07.738 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 3974300
00:11:07.738 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname
00:11:07.738 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:11:07.738 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3974300
00:11:07.738 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:11:07.738 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:11:07.738 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3974300'
killing process with pid 3974300
00:11:07.738 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 3974300
[2024-05-15 00:47:54.736659] app.c:1024:log_deprecation_hits: *WARNING*:
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:07.738 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 3974300 00:11:07.996 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:07.996 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:07.996 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:07.996 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:07.996 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:07.996 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.996 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.996 00:47:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.535 00:47:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:10.535 00:11:10.535 real 0m11.942s 00:11:10.535 user 0m27.740s 00:11:10.535 sys 0m2.725s 00:11:10.535 00:47:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:10.535 00:47:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:10.535 ************************************ 00:11:10.535 END TEST nvmf_delete_subsystem 00:11:10.535 ************************************ 00:11:10.535 00:47:57 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:10.535 00:47:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:10.535 00:47:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:10.535 00:47:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:10.535 ************************************ 00:11:10.535 START TEST nvmf_ns_masking 00:11:10.535 ************************************ 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:10.535 * Looking for test storage... 
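The ns_masking run that starts here repeatedly exercises a ns_is_visible helper; reconstructed from the target/ns_masking.sh@39-@41 trace lines that appear below, the check is approximately:

    ns_is_visible() {
        # A namespace counts as visible when list-ns shows it and id-ns returns a non-zero NGUID.
        nvme list-ns /dev/nvme0 | grep "$1" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1    # usage as in the trace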
00:11:10.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.535 00:47:57 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=47b0399e-d59c-4e15-bfd9-f1d44214f29a 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:10.536 00:47:57 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:10.536 00:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:11.915 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:11.915 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:11.915 Found net devices under 0000:08:00.0: cvl_0_0 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
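The device-discovery loop above (pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)) amounts to a sysfs glob per NIC function; a standalone equivalent, with the PCI addresses taken from this run:

    # List the kernel net devices bound to each NVMe-oF test port.
    for pci in 0000:08:00.0 0000:08:00.1; do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
        done
    done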
00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:11.915 Found net devices under 0000:08:00.1: cvl_0_1 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:11.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:11:11.915 00:11:11.915 --- 10.0.0.2 ping statistics --- 00:11:11.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.915 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:11.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:11.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:11:11.915 00:11:11.915 --- 10.0.0.1 ping statistics --- 00:11:11.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.915 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.915 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:11.916 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:11.916 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.916 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:11.916 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:11.916 00:47:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:11.916 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:11.916 00:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:11.916 00:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:12.174 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3976529 00:11:12.174 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:12.174 00:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3976529 00:11:12.174 00:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 3976529 ']' 00:11:12.174 00:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.174 00:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:12.174 00:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.174 00:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:12.174 00:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:12.174 [2024-05-15 00:47:59.026736] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
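For reference, the namespace plumbing traced above condenses to the following sequence (interface names and addresses exactly as in this run):

    # Target side lives in a network namespace; the initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns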
00:11:12.174 [2024-05-15 00:47:59.026826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.174 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.174 [2024-05-15 00:47:59.094237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.174 [2024-05-15 00:47:59.215563] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.174 [2024-05-15 00:47:59.215625] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.174 [2024-05-15 00:47:59.215640] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.174 [2024-05-15 00:47:59.215653] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.174 [2024-05-15 00:47:59.215665] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.174 [2024-05-15 00:47:59.215745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.174 [2024-05-15 00:47:59.215799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.174 [2024-05-15 00:47:59.215848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.174 [2024-05-15 00:47:59.215851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.432 00:47:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:12.432 00:47:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:11:12.432 00:47:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:12.432 00:47:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:12.432 00:47:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:12.432 00:47:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.432 00:47:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:12.690 [2024-05-15 00:47:59.638493] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.690 00:47:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:12.690 00:47:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:12.690 00:47:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:12.948 Malloc1 00:11:12.948 00:47:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:13.514 Malloc2 00:11:13.514 00:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:13.775 00:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:14.033 00:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.290 [2024-05-15 00:48:01.144522] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:14.290 [2024-05-15 00:48:01.144820] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.290 00:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:11:14.290 00:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 47b0399e-d59c-4e15-bfd9-f1d44214f29a -a 10.0.0.2 -s 4420 -i 4 00:11:14.290 00:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:14.290 00:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:14.290 00:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:14.290 00:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:14.290 00:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:16.817 [ 0]:0x1 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0aae308b818b40aca65dd51fb679a540 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0aae308b818b40aca65dd51fb679a540 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:16.817 [ 0]:0x1 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0aae308b818b40aca65dd51fb679a540 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0aae308b818b40aca65dd51fb679a540 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:16.817 [ 1]:0x2 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:16.817 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:17.075 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=dacdf813d745406dabfc846dbccff900 00:11:17.075 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ dacdf813d745406dabfc846dbccff900 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:17.075 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:11:17.075 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:17.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.075 00:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.333 00:48:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:17.590 00:48:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:11:17.590 00:48:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 47b0399e-d59c-4e15-bfd9-f1d44214f29a -a 10.0.0.2 -s 4420 -i 4 00:11:17.847 00:48:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:17.847 00:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:17.847 00:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:17.847 00:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:11:17.847 00:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:11:17.847 00:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:19.744 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:20.002 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:20.002 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:20.002 00:48:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:20.002 00:48:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:20.002 00:48:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:20.002 00:48:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:20.002 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:20.002 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:20.002 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:20.002 [ 0]:0x2 00:11:20.002 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:20.002 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:20.002 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=dacdf813d745406dabfc846dbccff900 00:11:20.002 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ dacdf813d745406dabfc846dbccff900 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:20.002 00:48:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:20.260 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:20.260 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:20.260 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:20.260 [ 0]:0x1 00:11:20.260 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:20.260 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:20.260 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0aae308b818b40aca65dd51fb679a540 00:11:20.260 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0aae308b818b40aca65dd51fb679a540 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:20.260 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:20.260 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:20.260 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:20.260 [ 1]:0x2 00:11:20.260 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:20.260 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:20.260 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=dacdf813d745406dabfc846dbccff900 00:11:20.260 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ dacdf813d745406dabfc846dbccff900 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:20.260 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:20.825 00:48:07 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:20.825 [ 0]:0x2 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=dacdf813d745406dabfc846dbccff900 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ dacdf813d745406dabfc846dbccff900 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.825 00:48:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:21.390 00:48:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:11:21.390 00:48:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 47b0399e-d59c-4e15-bfd9-f1d44214f29a -a 10.0.0.2 -s 4420 -i 4 00:11:21.390 00:48:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:21.390 00:48:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:21.390 00:48:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.390 00:48:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:11:21.390 00:48:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:11:21.390 00:48:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:23.287 00:48:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:23.287 00:48:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:23.287 00:48:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:23.545 00:48:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:11:23.545 00:48:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:23.545 00:48:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:23.545 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:11:23.545 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:23.545 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:23.545 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:23.545 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:23.545 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:23.545 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:23.545 [ 0]:0x1 00:11:23.545 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:23.545 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:23.545 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0aae308b818b40aca65dd51fb679a540 00:11:23.545 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0aae308b818b40aca65dd51fb679a540 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:23.545 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:23.545 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:23.545 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:23.803 [ 1]:0x2 00:11:23.803 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:23.803 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:23.803 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=dacdf813d745406dabfc846dbccff900 00:11:23.803 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ dacdf813d745406dabfc846dbccff900 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:23.803 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:24.061 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:24.061 00:48:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:24.061 00:48:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:24.061 00:48:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:24.061 00:48:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.061 00:48:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:24.061 00:48:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.061 00:48:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:24.061 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:24.061 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:24.061 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:24.061 00:48:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:24.061 [ 0]:0x2 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=dacdf813d745406dabfc846dbccff900 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ dacdf813d745406dabfc846dbccff900 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:24.061 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:24.319 [2024-05-15 00:48:11.337325] nvmf_rpc.c:1776:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:24.319 
request: 00:11:24.319 { 00:11:24.319 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.319 "nsid": 2, 00:11:24.319 "host": "nqn.2016-06.io.spdk:host1", 00:11:24.319 "method": "nvmf_ns_remove_host", 00:11:24.319 "req_id": 1 00:11:24.319 } 00:11:24.319 Got JSON-RPC error response 00:11:24.319 response: 00:11:24.319 { 00:11:24.319 "code": -32602, 00:11:24.319 "message": "Invalid parameters" 00:11:24.319 } 00:11:24.319 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:24.319 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:24.319 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:24.319 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:24.319 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:24.319 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:24.319 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:24.319 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:24.319 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.319 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:24.319 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.319 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:24.319 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:24.319 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:24.319 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:24.319 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:24.576 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:24.576 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:24.576 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:24.576 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:24.576 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:24.576 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:24.576 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:24.576 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:24.576 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:24.576 [ 0]:0x2 00:11:24.576 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:24.576 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:24.576 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=dacdf813d745406dabfc846dbccff900 00:11:24.576 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ dacdf813d745406dabfc846dbccff900 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:24.576 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:11:24.576 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:24.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.576 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:24.898 rmmod nvme_tcp 00:11:24.898 rmmod nvme_fabrics 00:11:24.898 rmmod nvme_keyring 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3976529 ']' 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3976529 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 3976529 ']' 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 3976529 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:24.898 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3976529 00:11:25.157 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:25.157 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:25.157 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3976529' 00:11:25.157 killing process with pid 3976529 00:11:25.157 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 3976529 00:11:25.157 [2024-05-15 00:48:11.897349] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:25.157 00:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 3976529 00:11:25.157 00:48:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:25.157 00:48:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:25.157 00:48:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:25.158 00:48:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:11:25.158 00:48:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:25.158 00:48:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.158 00:48:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:25.158 00:48:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.739 00:48:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:27.739 00:11:27.739 real 0m17.129s 00:11:27.739 user 0m55.233s 00:11:27.739 sys 0m3.591s 00:11:27.739 00:48:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:27.739 00:48:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:27.739 ************************************ 00:11:27.739 END TEST nvmf_ns_masking 00:11:27.739 ************************************ 00:11:27.739 00:48:14 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:27.739 00:48:14 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:27.739 00:48:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:27.739 00:48:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:27.739 00:48:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:27.739 ************************************ 00:11:27.739 START TEST nvmf_nvme_cli 00:11:27.739 ************************************ 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:27.739 * Looking for test storage... 
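The nvmf_ns_masking run that finishes above leans on one helper throughout: a visibility probe in target/ns_masking.sh that the xtrace shows as nvme list-ns piped to grep, followed by nvme id-ns piped to jq. Reconstructed from those trace lines — a sketch, not the verbatim source; $ctrl_id is the controller name the test resolves from nvme list-subsys (nvme0 in this run):

ns_is_visible() {
    # a visible namespace appears in the controller's active-namespace list
    nvme list-ns "/dev/$ctrl_id" | grep "$1"
    # and reports a real NGUID; a namespace that is attached but masked
    # for this host identifies with the all-zero placeholder
    nguid=$(nvme id-ns "/dev/$ctrl_id" -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

The NOT wrappers in the trace run the same probe and assert a non-zero exit status, which is why the masked cases above log nguid=00000000000000000000000000000000 before nvmf_ns_add_host grants visibility and again after nvmf_ns_remove_host revokes it.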
00:11:27.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:27.739 00:48:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:29.115 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:29.115 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:29.115 Found net devices under 0000:08:00.0: cvl_0_0 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:29.115 Found net devices under 0000:08:00.1: cvl_0_1 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.115 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.116 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.116 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:29.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:11:29.116 00:11:29.116 --- 10.0.0.2 ping statistics --- 00:11:29.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.116 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:11:29.116 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:29.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:11:29.116 00:11:29.116 --- 10.0.0.1 ping statistics --- 00:11:29.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.116 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:11:29.116 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.116 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:29.116 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:29.116 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.116 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:29.116 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:29.116 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.116 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:29.116 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:29.374 00:48:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:29.374 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:29.374 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:29.374 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:29.374 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3979936 00:11:29.374 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:29.374 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3979936 00:11:29.374 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 3979936 ']' 00:11:29.374 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.374 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:29.374 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.374 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:29.374 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:29.374 [2024-05-15 00:48:16.232346] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:11:29.374 [2024-05-15 00:48:16.232444] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.374 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.374 [2024-05-15 00:48:16.299687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.374 [2024-05-15 00:48:16.420040] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.374 [2024-05-15 00:48:16.420106] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
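Everything from the "Found net devices" lines down to the ping checks above is the standard phy-mode topology setup from nvmf/common.sh: the two E810 ports (cvl_0_0 and cvl_0_1, apparently cabled back to back) are split between a dedicated network namespace for the target and the default namespace for the initiator, so the NVMe/TCP traffic traverses real NICs rather than loopback. Collected from the trace into one place (device names and addresses exactly as logged):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port on the host side

The bidirectional pings confirm the path before nvmf_tgt itself is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF, per the nvmf/common.sh@480 line above).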
00:11:29.374 [2024-05-15 00:48:16.420122] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.374 [2024-05-15 00:48:16.420143] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.374 [2024-05-15 00:48:16.420155] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.374 [2024-05-15 00:48:16.420244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.374 [2024-05-15 00:48:16.420294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.374 [2024-05-15 00:48:16.420345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.374 [2024-05-15 00:48:16.420348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:29.632 [2024-05-15 00:48:16.566528] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:29.632 Malloc0 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:29.632 Malloc1 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.632 00:48:16 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:29.632 [2024-05-15 00:48:16.645229] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:29.632 [2024-05-15 00:48:16.645515] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.632 00:48:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:11:29.890 00:11:29.890 Discovery Log Number of Records 2, Generation counter 2 00:11:29.890 =====Discovery Log Entry 0====== 00:11:29.890 trtype: tcp 00:11:29.890 adrfam: ipv4 00:11:29.890 subtype: current discovery subsystem 00:11:29.890 treq: not required 00:11:29.890 portid: 0 00:11:29.890 trsvcid: 4420 00:11:29.890 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:29.890 traddr: 10.0.0.2 00:11:29.890 eflags: explicit discovery connections, duplicate discovery information 00:11:29.890 sectype: none 00:11:29.890 =====Discovery Log Entry 1====== 00:11:29.890 trtype: tcp 00:11:29.890 adrfam: ipv4 00:11:29.890 subtype: nvme subsystem 00:11:29.890 treq: not required 00:11:29.890 portid: 0 00:11:29.890 trsvcid: 4420 00:11:29.890 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:29.890 traddr: 10.0.0.2 00:11:29.890 eflags: none 00:11:29.890 sectype: none 00:11:29.890 00:48:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:29.890 00:48:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:29.890 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:29.890 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.890 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:29.890 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:29.890 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
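The nvmf/common.sh@521-526 lines above (repeated after the connect below) are the device-enumeration helper walking nvme list output one row at a time. A reconstruction from the xtrace — a sketch that assumes the loop reads from process substitution, not the verbatim source:

get_nvme_devs() {
    local dev _
    # first column of `nvme list` is the device node; header rows
    # ("Node", "-----") fail the pattern and are skipped
    while read -r dev _; do
        if [[ $dev == /dev/nvme* ]]; then
            echo "$dev"
        fi
    done < <(nvme list)
}

Before the connect it prints nothing (nvme_num_before_connection=0, recorded just below); after the connect to cnode1 it reports /dev/nvme0n1 and /dev/nvme0n2, and the check at target/nvme_cli.sh@62 appears to treat a device count that failed to grow as a test failure.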
00:11:29.890 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:29.890 00:48:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.890 00:48:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:29.890 00:48:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.149 00:48:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:30.149 00:48:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:11:30.149 00:48:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:30.149 00:48:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:11:30.149 00:48:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:11:30.149 00:48:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:32.676 /dev/nvme0n1 ]] 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:32.676 00:48:19 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:32.676 00:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:32.934 rmmod nvme_tcp 00:11:32.934 rmmod nvme_fabrics 00:11:32.934 rmmod nvme_keyring 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3979936 ']' 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3979936 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 3979936 ']' 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 3979936 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3979936 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3979936' 00:11:32.934 killing process with pid 3979936 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 3979936 00:11:32.934 [2024-05-15 00:48:19.856862] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:32.934 00:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 3979936 00:11:33.193 00:48:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:33.193 00:48:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:33.193 00:48:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:33.193 00:48:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:33.193 00:48:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:33.193 00:48:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.193 00:48:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:33.193 00:48:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.103 00:48:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:35.103 00:11:35.103 real 0m7.896s 00:11:35.103 user 0m14.919s 00:11:35.103 sys 0m1.982s 00:11:35.103 00:48:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:35.103 00:48:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.103 ************************************ 00:11:35.103 END TEST nvmf_nvme_cli 00:11:35.103 ************************************ 00:11:35.362 00:48:22 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:35.362 00:48:22 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:35.362 00:48:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:35.362 00:48:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:35.362 00:48:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:35.362 ************************************ 00:11:35.362 START 
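killprocess, traced just before the END TEST banner above, is the standard teardown helper: confirm the PID is set and still alive, refuse to signal a sudo wrapper, then kill and reap the process. A minimal sketch of the checks the trace walks through (the uname/Linux branch is folded away here):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1            # still running?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
        [[ $process_name != sudo ]] || return 1           # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null   # reaping works because nvmf_tgt is a child of this shell
    }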
TEST nvmf_vfio_user 00:11:35.362 ************************************ 00:11:35.362 00:48:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:35.362 * Looking for test storage... 00:11:35.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.362 00:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.362 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:35.362 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.362 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.362 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.362 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3980664 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3980664' 00:11:35.363 Process pid: 3980664 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3980664 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3980664 ']' 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:35.363 00:48:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:35.363 [2024-05-15 00:48:22.328191] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:11:35.363 [2024-05-15 00:48:22.328280] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.363 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.363 [2024-05-15 00:48:22.387533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.622 [2024-05-15 00:48:22.504495] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.622 [2024-05-15 00:48:22.504553] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.622 [2024-05-15 00:48:22.504569] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.622 [2024-05-15 00:48:22.504582] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.622 [2024-05-15 00:48:22.504594] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
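Stripped of the xtrace noise, the bring-up above reduces to: start nvmf_tgt pinned to cores 0-3, install a cleanup trap, and block in waitforlisten until the app answers RPCs on /var/tmp/spdk.sock. Approximately, with the long workspace prefix shortened:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    echo "Process pid: $nvmfpid"
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock, max_retries=100 per the trace

The DPDK EAL parameter dump and the reactor notices that follow are nvmf_tgt's own startup output interleaved with the script trace.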
00:11:35.622 [2024-05-15 00:48:22.504672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.622 [2024-05-15 00:48:22.504723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.622 [2024-05-15 00:48:22.504774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.622 [2024-05-15 00:48:22.504778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.622 00:48:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:35.622 00:48:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:11:35.622 00:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:36.997 00:48:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:36.997 00:48:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:36.997 00:48:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:36.997 00:48:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:36.997 00:48:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:36.997 00:48:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:37.255 Malloc1 00:11:37.255 00:48:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:37.513 00:48:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:38.079 00:48:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:38.079 [2024-05-15 00:48:25.114089] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:38.079 00:48:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:38.080 00:48:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:38.337 00:48:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:38.595 Malloc2 00:11:38.595 00:48:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:38.853 00:48:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:39.110 00:48:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
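Everything from nvmf_create_transport down to the second nvmf_subsystem_add_listener call above is setup_nvmf_vfio_user's device loop: one VFIOUSER transport, then a malloc bdev, a subsystem, a namespace, and a vfio-user listener per device (NUM_DEVICES=2). Condensed into a standalone script, with the workspace prefix shortened and the flags copied from the trace:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i     # 64 MB bdev, 512 B blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done

Unlike the earlier TCP listener (-a 10.0.0.2 -s 4420), the vfio-user listener's -a argument is a socket directory on the local filesystem.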
00:11:39.368 00:48:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:39.368 00:48:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:39.368 00:48:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:39.368 00:48:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:39.368 00:48:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:39.369 00:48:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:39.369 [2024-05-15 00:48:26.352133] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:11:39.369 [2024-05-15 00:48:26.352185] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3981001 ] 00:11:39.369 EAL: No free 2048 kB hugepages reported on node 1 00:11:39.369 [2024-05-15 00:48:26.395778] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:39.369 [2024-05-15 00:48:26.398357] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:39.369 [2024-05-15 00:48:26.398388] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f38c45a4000 00:11:39.369 [2024-05-15 00:48:26.399347] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:39.369 [2024-05-15 00:48:26.400343] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:39.369 [2024-05-15 00:48:26.401347] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:39.369 [2024-05-15 00:48:26.402353] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:39.369 [2024-05-15 00:48:26.403362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:39.369 [2024-05-15 00:48:26.404370] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:39.369 [2024-05-15 00:48:26.405372] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:39.369 [2024-05-15 00:48:26.406374] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:39.369 [2024-05-15 00:48:26.407384] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:39.369 [2024-05-15 00:48:26.407409] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f38c4599000 00:11:39.369 [2024-05-15 00:48:26.408867] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:39.629 [2024-05-15 00:48:26.429442] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:39.629 [2024-05-15 00:48:26.429483] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:39.629 [2024-05-15 00:48:26.432522] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:39.629 [2024-05-15 00:48:26.432586] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:39.629 [2024-05-15 00:48:26.432697] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:39.629 [2024-05-15 00:48:26.432727] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:39.629 [2024-05-15 00:48:26.432739] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:39.629 [2024-05-15 00:48:26.433516] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:39.629 [2024-05-15 00:48:26.433538] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:39.629 [2024-05-15 00:48:26.433552] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:39.629 [2024-05-15 00:48:26.434520] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:39.629 [2024-05-15 00:48:26.434548] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:39.629 [2024-05-15 00:48:26.434564] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:39.629 [2024-05-15 00:48:26.435525] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:39.629 [2024-05-15 00:48:26.435545] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:39.629 [2024-05-15 00:48:26.436530] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:39.629 [2024-05-15 00:48:26.436551] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:39.629 [2024-05-15 00:48:26.436561] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:39.629 [2024-05-15 00:48:26.436574] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:39.629 
[2024-05-15 00:48:26.436687] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:39.630 [2024-05-15 00:48:26.436696] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:39.630 [2024-05-15 00:48:26.436706] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:39.630 [2024-05-15 00:48:26.437536] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:39.630 [2024-05-15 00:48:26.438541] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:39.630 [2024-05-15 00:48:26.439551] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:39.630 [2024-05-15 00:48:26.440545] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:39.630 [2024-05-15 00:48:26.440647] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:39.630 [2024-05-15 00:48:26.441564] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:39.630 [2024-05-15 00:48:26.441584] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:39.630 [2024-05-15 00:48:26.441595] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.441623] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:39.630 [2024-05-15 00:48:26.441644] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.441675] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:39.630 [2024-05-15 00:48:26.441686] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:39.630 [2024-05-15 00:48:26.441710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:39.630 [2024-05-15 00:48:26.441773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:39.630 [2024-05-15 00:48:26.441797] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:39.630 [2024-05-15 00:48:26.441807] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:39.630 [2024-05-15 00:48:26.441817] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:39.630 [2024-05-15 00:48:26.441825] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:39.630 [2024-05-15 00:48:26.441834] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:39.630 [2024-05-15 00:48:26.441843] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:39.630 [2024-05-15 00:48:26.441852] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.441871] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.441895] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:39.630 [2024-05-15 00:48:26.441920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:39.630 [2024-05-15 00:48:26.441947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.630 [2024-05-15 00:48:26.441963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.630 [2024-05-15 00:48:26.441977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.630 [2024-05-15 00:48:26.441994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.630 [2024-05-15 00:48:26.442004] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.442021] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.442037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:39.630 [2024-05-15 00:48:26.442062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:39.630 [2024-05-15 00:48:26.442075] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:39.630 [2024-05-15 00:48:26.442085] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.442097] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.442112] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.442128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:39.630 [2024-05-15 
00:48:26.442141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:39.630 [2024-05-15 00:48:26.442203] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.442225] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.442241] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:39.630 [2024-05-15 00:48:26.442251] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:39.630 [2024-05-15 00:48:26.442262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:39.630 [2024-05-15 00:48:26.442280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:39.630 [2024-05-15 00:48:26.442304] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:39.630 [2024-05-15 00:48:26.442322] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.442339] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.442353] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:39.630 [2024-05-15 00:48:26.442362] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:39.630 [2024-05-15 00:48:26.442373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:39.630 [2024-05-15 00:48:26.442401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:39.630 [2024-05-15 00:48:26.442421] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.442437] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.442450] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:39.630 [2024-05-15 00:48:26.442459] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:39.630 [2024-05-15 00:48:26.442470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:39.630 [2024-05-15 00:48:26.442483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:39.630 [2024-05-15 00:48:26.442504] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:39.630 
[2024-05-15 00:48:26.442519] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.442543] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.442561] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.442571] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.442580] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:39.630 [2024-05-15 00:48:26.442589] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:39.630 [2024-05-15 00:48:26.442602] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:39.630 [2024-05-15 00:48:26.442637] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:39.630 [2024-05-15 00:48:26.442658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:39.630 [2024-05-15 00:48:26.442680] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:39.630 [2024-05-15 00:48:26.442693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:39.630 [2024-05-15 00:48:26.442712] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:39.630 [2024-05-15 00:48:26.442724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:39.630 [2024-05-15 00:48:26.442742] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:39.630 [2024-05-15 00:48:26.442755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:39.630 [2024-05-15 00:48:26.442775] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:39.630 [2024-05-15 00:48:26.442785] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:39.630 [2024-05-15 00:48:26.442792] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:39.630 [2024-05-15 00:48:26.442799] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:39.630 [2024-05-15 00:48:26.442810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:39.630 [2024-05-15 00:48:26.442823] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:39.630 [2024-05-15 00:48:26.442832] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:39.630 [2024-05-15 00:48:26.442843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:39.630 [2024-05-15 00:48:26.442855] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:39.630 [2024-05-15 00:48:26.442864] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:39.630 [2024-05-15 00:48:26.442874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:39.630 [2024-05-15 00:48:26.442892] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:39.630 [2024-05-15 00:48:26.442902] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:39.630 [2024-05-15 00:48:26.442913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:39.630 [2024-05-15 00:48:26.442926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:39.631 [2024-05-15 00:48:26.442957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:39.631 [2024-05-15 00:48:26.442977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:39.631 [2024-05-15 00:48:26.442995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:39.631 ===================================================== 00:11:39.631 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:39.631 ===================================================== 00:11:39.631 Controller Capabilities/Features 00:11:39.631 ================================ 00:11:39.631 Vendor ID: 4e58 00:11:39.631 Subsystem Vendor ID: 4e58 00:11:39.631 Serial Number: SPDK1 00:11:39.631 Model Number: SPDK bdev Controller 00:11:39.631 Firmware Version: 24.05 00:11:39.631 Recommended Arb Burst: 6 00:11:39.631 IEEE OUI Identifier: 8d 6b 50 00:11:39.631 Multi-path I/O 00:11:39.631 May have multiple subsystem ports: Yes 00:11:39.631 May have multiple controllers: Yes 00:11:39.631 Associated with SR-IOV VF: No 00:11:39.631 Max Data Transfer Size: 131072 00:11:39.631 Max Number of Namespaces: 32 00:11:39.631 Max Number of I/O Queues: 127 00:11:39.631 NVMe Specification Version (VS): 1.3 00:11:39.631 NVMe Specification Version (Identify): 1.3 00:11:39.631 Maximum Queue Entries: 256 00:11:39.631 Contiguous Queues Required: Yes 00:11:39.631 Arbitration Mechanisms Supported 00:11:39.631 Weighted Round Robin: Not Supported 00:11:39.631 Vendor Specific: Not Supported 00:11:39.631 Reset Timeout: 15000 ms 00:11:39.631 Doorbell Stride: 4 bytes 00:11:39.631 NVM Subsystem Reset: Not Supported 00:11:39.631 Command Sets Supported 00:11:39.631 NVM Command Set: Supported 00:11:39.631 Boot Partition: Not Supported 00:11:39.631 Memory Page Size Minimum: 4096 bytes 00:11:39.631 Memory Page Size Maximum: 4096 bytes 00:11:39.631 Persistent Memory Region: Not Supported 00:11:39.631 Optional Asynchronous 
Events Supported 00:11:39.631 Namespace Attribute Notices: Supported 00:11:39.631 Firmware Activation Notices: Not Supported 00:11:39.631 ANA Change Notices: Not Supported 00:11:39.631 PLE Aggregate Log Change Notices: Not Supported 00:11:39.631 LBA Status Info Alert Notices: Not Supported 00:11:39.631 EGE Aggregate Log Change Notices: Not Supported 00:11:39.631 Normal NVM Subsystem Shutdown event: Not Supported 00:11:39.631 Zone Descriptor Change Notices: Not Supported 00:11:39.631 Discovery Log Change Notices: Not Supported 00:11:39.631 Controller Attributes 00:11:39.631 128-bit Host Identifier: Supported 00:11:39.631 Non-Operational Permissive Mode: Not Supported 00:11:39.631 NVM Sets: Not Supported 00:11:39.631 Read Recovery Levels: Not Supported 00:11:39.631 Endurance Groups: Not Supported 00:11:39.631 Predictable Latency Mode: Not Supported 00:11:39.631 Traffic Based Keep Alive: Not Supported 00:11:39.631 Namespace Granularity: Not Supported 00:11:39.631 SQ Associations: Not Supported 00:11:39.631 UUID List: Not Supported 00:11:39.631 Multi-Domain Subsystem: Not Supported 00:11:39.631 Fixed Capacity Management: Not Supported 00:11:39.631 Variable Capacity Management: Not Supported 00:11:39.631 Delete Endurance Group: Not Supported 00:11:39.631 Delete NVM Set: Not Supported 00:11:39.631 Extended LBA Formats Supported: Not Supported 00:11:39.631 Flexible Data Placement Supported: Not Supported 00:11:39.631 00:11:39.631 Controller Memory Buffer Support 00:11:39.631 ================================ 00:11:39.631 Supported: No 00:11:39.631 00:11:39.631 Persistent Memory Region Support 00:11:39.631 ================================ 00:11:39.631 Supported: No 00:11:39.631 00:11:39.631 Admin Command Set Attributes 00:11:39.631 ============================ 00:11:39.631 Security Send/Receive: Not Supported 00:11:39.631 Format NVM: Not Supported 00:11:39.631 Firmware Activate/Download: Not Supported 00:11:39.631 Namespace Management: Not Supported 00:11:39.631 Device Self-Test: Not Supported 00:11:39.631 Directives: Not Supported 00:11:39.631 NVMe-MI: Not Supported 00:11:39.631 Virtualization Management: Not Supported 00:11:39.631 Doorbell Buffer Config: Not Supported 00:11:39.631 Get LBA Status Capability: Not Supported 00:11:39.631 Command & Feature Lockdown Capability: Not Supported 00:11:39.631 Abort Command Limit: 4 00:11:39.631 Async Event Request Limit: 4 00:11:39.631 Number of Firmware Slots: N/A 00:11:39.631 Firmware Slot 1 Read-Only: N/A 00:11:39.631 Firmware Activation Without Reset: N/A 00:11:39.631 Multiple Update Detection Support: N/A 00:11:39.631 Firmware Update Granularity: No Information Provided 00:11:39.631 Per-Namespace SMART Log: No 00:11:39.631 Asymmetric Namespace Access Log Page: Not Supported 00:11:39.631 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:39.631 Command Effects Log Page: Supported 00:11:39.631 Get Log Page Extended Data: Supported 00:11:39.631 Telemetry Log Pages: Not Supported 00:11:39.631 Persistent Event Log Pages: Not Supported 00:11:39.631 Supported Log Pages Log Page: May Support 00:11:39.631 Commands Supported & Effects Log Page: Not Supported 00:11:39.631 Feature Identifiers & Effects Log Page: May Support 00:11:39.631 NVMe-MI Commands & Effects Log Page: May Support 00:11:39.631 Data Area 4 for Telemetry Log: Not Supported 00:11:39.631 Error Log Page Entries Supported: 128 00:11:39.631 Keep Alive: Supported 00:11:39.631 Keep Alive Granularity: 10000 ms 00:11:39.631 00:11:39.631 NVM Command Set Attributes 00:11:39.631 ==========================
00:11:39.631 Submission Queue Entry Size 00:11:39.631 Max: 64 00:11:39.631 Min: 64 00:11:39.631 Completion Queue Entry Size 00:11:39.631 Max: 16 00:11:39.631 Min: 16 00:11:39.631 Number of Namespaces: 32 00:11:39.631 Compare Command: Supported 00:11:39.631 Write Uncorrectable Command: Not Supported 00:11:39.631 Dataset Management Command: Supported 00:11:39.631 Write Zeroes Command: Supported 00:11:39.631 Set Features Save Field: Not Supported 00:11:39.631 Reservations: Not Supported 00:11:39.631 Timestamp: Not Supported 00:11:39.631 Copy: Supported 00:11:39.631 Volatile Write Cache: Present 00:11:39.631 Atomic Write Unit (Normal): 1 00:11:39.631 Atomic Write Unit (PFail): 1 00:11:39.631 Atomic Compare & Write Unit: 1 00:11:39.631 Fused Compare & Write: Supported 00:11:39.631 Scatter-Gather List 00:11:39.631 SGL Command Set: Supported (Dword aligned) 00:11:39.631 SGL Keyed: Not Supported 00:11:39.631 SGL Bit Bucket Descriptor: Not Supported 00:11:39.631 SGL Metadata Pointer: Not Supported 00:11:39.631 Oversized SGL: Not Supported 00:11:39.631 SGL Metadata Address: Not Supported 00:11:39.631 SGL Offset: Not Supported 00:11:39.631 Transport SGL Data Block: Not Supported 00:11:39.631 Replay Protected Memory Block: Not Supported 00:11:39.631 00:11:39.631 Firmware Slot Information 00:11:39.631 ========================= 00:11:39.631 Active slot: 1 00:11:39.631 Slot 1 Firmware Revision: 24.05 00:11:39.631 00:11:39.631 00:11:39.631 Commands Supported and Effects 00:11:39.631 ============================== 00:11:39.631 Admin Commands 00:11:39.631 -------------- 00:11:39.631 Get Log Page (02h): Supported 00:11:39.631 Identify (06h): Supported 00:11:39.631 Abort (08h): Supported 00:11:39.631 Set Features (09h): Supported 00:11:39.631 Get Features (0Ah): Supported 00:11:39.631 Asynchronous Event Request (0Ch): Supported 00:11:39.631 Keep Alive (18h): Supported 00:11:39.631 I/O Commands 00:11:39.631 ------------ 00:11:39.631 Flush (00h): Supported LBA-Change 00:11:39.631 Write (01h): Supported LBA-Change 00:11:39.631 Read (02h): Supported 00:11:39.631 Compare (05h): Supported 00:11:39.631 Write Zeroes (08h): Supported LBA-Change 00:11:39.631 Dataset Management (09h): Supported LBA-Change 00:11:39.631 Copy (19h): Supported LBA-Change 00:11:39.631 Unknown (79h): Supported LBA-Change 00:11:39.631 Unknown (7Ah): Supported 00:11:39.631 00:11:39.631 Error Log 00:11:39.631 ========= 00:11:39.631 00:11:39.631 Arbitration 00:11:39.631 =========== 00:11:39.631 Arbitration Burst: 1 00:11:39.631 00:11:39.631 Power Management 00:11:39.631 ================ 00:11:39.631 Number of Power States: 1 00:11:39.631 Current Power State: Power State #0 00:11:39.631 Power State #0: 00:11:39.631 Max Power: 0.00 W 00:11:39.631 Non-Operational State: Operational 00:11:39.631 Entry Latency: Not Reported 00:11:39.631 Exit Latency: Not Reported 00:11:39.631 Relative Read Throughput: 0 00:11:39.631 Relative Read Latency: 0 00:11:39.631 Relative Write Throughput: 0 00:11:39.631 Relative Write Latency: 0 00:11:39.631 Idle Power: Not Reported 00:11:39.631 Active Power: Not Reported 00:11:39.631 Non-Operational Permissive Mode: Not Supported 00:11:39.631 00:11:39.631 Health Information 00:11:39.631 ================== 00:11:39.631 Critical Warnings: 00:11:39.631 Available Spare Space: OK 00:11:39.631 Temperature: OK 00:11:39.631 Device Reliability: OK 00:11:39.631 Read Only: No 00:11:39.631 Volatile Memory Backup: OK 00:11:39.631 Current Temperature: 0 Kelvin (-273 Celsius) 
[2024-05-15 00:48:26.443136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:39.632 [2024-05-15 00:48:26.443157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:39.632 [2024-05-15 00:48:26.443201] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:39.632 [2024-05-15 00:48:26.443219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.632 [2024-05-15 00:48:26.443237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.632 [2024-05-15 00:48:26.443248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.632 [2024-05-15 00:48:26.443260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.632 [2024-05-15 00:48:26.448946] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:39.632 [2024-05-15 00:48:26.448980] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:39.632 [2024-05-15 00:48:26.449598] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:39.632 [2024-05-15 00:48:26.449677] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:39.632 [2024-05-15 00:48:26.449692] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:39.632 [2024-05-15 00:48:26.450614] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:39.632 [2024-05-15 00:48:26.450638] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:39.632 [2024-05-15 00:48:26.450719] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:39.632 [2024-05-15 00:48:26.452662] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 
00:11:39.632 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:39.632 Available Spare: 0% 00:11:39.632 Available Spare Threshold: 0% 00:11:39.632 Life Percentage Used: 0% 00:11:39.632 Data Units Read: 0 00:11:39.632 Data Units Written: 0 00:11:39.632 Host Read Commands: 0 00:11:39.632 Host Write Commands: 0 00:11:39.632 Controller Busy Time: 0 minutes 00:11:39.632 Power Cycles: 0 00:11:39.632 Power On Hours: 0 hours 00:11:39.632 Unsafe Shutdowns: 0 00:11:39.632 Unrecoverable Media Errors: 0 00:11:39.632 Lifetime Error Log Entries: 0 00:11:39.632 Warning Temperature Time: 0 minutes 00:11:39.632 Critical Temperature Time: 0 minutes 00:11:39.632 00:11:39.632 Number of Queues 00:11:39.632 ================ 00:11:39.632 Number of I/O Submission Queues: 127 00:11:39.632 Number of I/O Completion Queues: 127 00:11:39.632 00:11:39.632 Active Namespaces 00:11:39.632 ================= 00:11:39.632 Namespace
ID:1 00:11:39.632 Error Recovery Timeout: Unlimited 00:11:39.632 Command Set Identifier: NVM (00h) 00:11:39.632 Deallocate: Supported 00:11:39.632 Deallocated/Unwritten Error: Not Supported 00:11:39.632 Deallocated Read Value: Unknown 00:11:39.632 Deallocate in Write Zeroes: Not Supported 00:11:39.632 Deallocated Guard Field: 0xFFFF 00:11:39.632 Flush: Supported 00:11:39.632 Reservation: Supported 00:11:39.632 Namespace Sharing Capabilities: Multiple Controllers 00:11:39.632 Size (in LBAs): 131072 (0GiB) 00:11:39.632 Capacity (in LBAs): 131072 (0GiB) 00:11:39.632 Utilization (in LBAs): 131072 (0GiB) 00:11:39.632 NGUID: 0C792F67019A4709B1A2A6E4102CC468 00:11:39.632 UUID: 0c792f67-019a-4709-b1a2-a6e4102cc468 00:11:39.632 Thin Provisioning: Not Supported 00:11:39.632 Per-NS Atomic Units: Yes 00:11:39.632 Atomic Boundary Size (Normal): 0 00:11:39.632 Atomic Boundary Size (PFail): 0 00:11:39.632 Atomic Boundary Offset: 0 00:11:39.632 Maximum Single Source Range Length: 65535 00:11:39.632 Maximum Copy Length: 65535 00:11:39.632 Maximum Source Range Count: 1 00:11:39.632 NGUID/EUI64 Never Reused: No 00:11:39.632 Namespace Write Protected: No 00:11:39.632 Number of LBA Formats: 1 00:11:39.632 Current LBA Format: LBA Format #00 00:11:39.632 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:39.632 00:11:39.632 00:48:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:39.632 EAL: No free 2048 kB hugepages reported on node 1 00:11:39.632 [2024-05-15 00:48:26.681785] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:44.897 Initializing NVMe Controllers 00:11:44.897 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:44.897 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:44.897 Initialization complete. Launching workers. 00:11:44.897 ======================================================== 00:11:44.897 Latency(us) 00:11:44.897 Device Information : IOPS MiB/s Average min max 00:11:44.897 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 24087.60 94.09 5318.02 1472.59 10540.25 00:11:44.897 ======================================================== 00:11:44.897 Total : 24087.60 94.09 5318.02 1472.59 10540.25 00:11:44.897 00:11:44.897 [2024-05-15 00:48:31.705511] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:44.897 00:48:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:44.897 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.897 [2024-05-15 00:48:31.935703] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:50.159 Initializing NVMe Controllers 00:11:50.159 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:50.159 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:50.159 Initialization complete. Launching workers. 
00:11:50.159 ======================================================== 00:11:50.159 Latency(us) 00:11:50.159 Device Information : IOPS MiB/s Average min max 00:11:50.159 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16011.33 62.54 7993.56 6954.86 15261.82 00:11:50.159 ======================================================== 00:11:50.159 Total : 16011.33 62.54 7993.56 6954.86 15261.82 00:11:50.159 00:11:50.159 [2024-05-15 00:48:36.969559] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:50.159 00:48:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:50.159 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.160 [2024-05-15 00:48:37.201711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:55.420 [2024-05-15 00:48:42.278219] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:55.420 Initializing NVMe Controllers 00:11:55.421 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:55.421 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:55.421 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:55.421 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:55.421 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:55.421 Initialization complete. Launching workers. 00:11:55.421 Starting thread on core 2 00:11:55.421 Starting thread on core 3 00:11:55.421 Starting thread on core 1 00:11:55.421 00:48:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:55.421 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.677 [2024-05-15 00:48:42.568416] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:58.960 [2024-05-15 00:48:45.639134] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:58.960 Initializing NVMe Controllers 00:11:58.960 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:58.960 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:58.960 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:58.960 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:58.960 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:58.960 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:58.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:58.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:58.960 Initialization complete. Launching workers. 
00:11:58.960 Starting thread on core 1 with urgent priority queue 00:11:58.960 Starting thread on core 2 with urgent priority queue 00:11:58.960 Starting thread on core 3 with urgent priority queue 00:11:58.960 Starting thread on core 0 with urgent priority queue 00:11:58.960 SPDK bdev Controller (SPDK1 ) core 0: 2964.67 IO/s 33.73 secs/100000 ios 00:11:58.960 SPDK bdev Controller (SPDK1 ) core 1: 2944.00 IO/s 33.97 secs/100000 ios 00:11:58.960 SPDK bdev Controller (SPDK1 ) core 2: 2719.33 IO/s 36.77 secs/100000 ios 00:11:58.960 SPDK bdev Controller (SPDK1 ) core 3: 2786.67 IO/s 35.89 secs/100000 ios 00:11:58.960 ======================================================== 00:11:58.960 00:11:58.960 00:48:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:58.960 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.960 [2024-05-15 00:48:45.929501] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:58.960 Initializing NVMe Controllers 00:11:58.960 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:58.960 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:58.960 Namespace ID: 1 size: 0GB 00:11:58.960 Initialization complete. 00:11:58.960 INFO: using host memory buffer for IO 00:11:58.960 Hello world! 00:11:58.960 [2024-05-15 00:48:45.966723] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:59.217 00:48:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:59.217 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.217 [2024-05-15 00:48:46.248406] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:00.586 Initializing NVMe Controllers 00:12:00.586 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:00.586 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:00.586 Initialization complete. Launching workers. 
00:12:00.586 submit (in ns) avg, min, max = 8318.5, 4475.6, 4024309.6 00:12:00.586 complete (in ns) avg, min, max = 32073.9, 2654.8, 7007638.5 00:12:00.586 00:12:00.586 Submit histogram 00:12:00.586 ================ 00:12:00.586 Range in us Cumulative Count 00:12:00.586 4.456 - 4.480: 0.0087% ( 1) 00:12:00.586 4.480 - 4.504: 0.0953% ( 10) 00:12:00.586 4.504 - 4.527: 0.8144% ( 83) 00:12:00.586 4.527 - 4.551: 2.9718% ( 249) 00:12:00.586 4.551 - 4.575: 6.7666% ( 438) 00:12:00.586 4.575 - 4.599: 11.8957% ( 592) 00:12:00.586 4.599 - 4.622: 16.0024% ( 474) 00:12:00.586 4.622 - 4.646: 17.9518% ( 225) 00:12:00.586 4.646 - 4.670: 18.8269% ( 101) 00:12:00.586 4.670 - 4.693: 19.4420% ( 71) 00:12:00.586 4.693 - 4.717: 20.8716% ( 165) 00:12:00.586 4.717 - 4.741: 23.5661% ( 311) 00:12:00.586 4.741 - 4.764: 28.1927% ( 534) 00:12:00.586 4.764 - 4.788: 32.6200% ( 511) 00:12:00.586 4.788 - 4.812: 36.8914% ( 493) 00:12:00.586 4.812 - 4.836: 38.2083% ( 152) 00:12:00.586 4.836 - 4.859: 38.8148% ( 70) 00:12:00.586 4.859 - 4.883: 39.2220% ( 47) 00:12:00.586 4.883 - 4.907: 39.7678% ( 63) 00:12:00.586 4.907 - 4.930: 40.1663% ( 46) 00:12:00.586 4.930 - 4.954: 40.7122% ( 63) 00:12:00.586 4.954 - 4.978: 41.0934% ( 44) 00:12:00.586 4.978 - 5.001: 41.3360% ( 28) 00:12:00.586 5.001 - 5.025: 41.5353% ( 23) 00:12:00.586 5.025 - 5.049: 41.6392% ( 12) 00:12:00.586 5.049 - 5.073: 41.6999% ( 7) 00:12:00.586 5.073 - 5.096: 41.7345% ( 4) 00:12:00.586 5.096 - 5.120: 41.9078% ( 20) 00:12:00.586 5.120 - 5.144: 42.2197% ( 36) 00:12:00.586 5.144 - 5.167: 44.3857% ( 250) 00:12:00.586 5.167 - 5.191: 47.2362% ( 329) 00:12:00.586 5.191 - 5.215: 52.9544% ( 660) 00:12:00.586 5.215 - 5.239: 55.3977% ( 282) 00:12:00.586 5.239 - 5.262: 56.5413% ( 132) 00:12:00.586 5.262 - 5.286: 57.2344% ( 80) 00:12:00.586 5.286 - 5.310: 58.7593% ( 176) 00:12:00.586 5.310 - 5.333: 61.8004% ( 351) 00:12:00.586 5.333 - 5.357: 66.5569% ( 549) 00:12:00.586 5.357 - 5.381: 69.5893% ( 350) 00:12:00.586 5.381 - 5.404: 71.3395% ( 202) 00:12:00.586 5.404 - 5.428: 72.1625% ( 95) 00:12:00.586 5.428 - 5.452: 74.0253% ( 215) 00:12:00.586 5.452 - 5.476: 74.7877% ( 88) 00:12:00.586 5.476 - 5.499: 75.0823% ( 34) 00:12:00.586 5.499 - 5.523: 75.1430% ( 7) 00:12:00.586 5.523 - 5.547: 77.0837% ( 224) 00:12:00.586 5.547 - 5.570: 80.3067% ( 372) 00:12:00.586 5.570 - 5.594: 88.4509% ( 940) 00:12:00.586 5.594 - 5.618: 91.7605% ( 382) 00:12:00.586 5.618 - 5.641: 93.3894% ( 188) 00:12:00.586 5.641 - 5.665: 93.8139% ( 49) 00:12:00.586 5.665 - 5.689: 94.0738% ( 30) 00:12:00.586 5.689 - 5.713: 94.2298% ( 18) 00:12:00.586 5.713 - 5.736: 94.3251% ( 11) 00:12:00.586 5.736 - 5.760: 94.4550% ( 15) 00:12:00.586 5.760 - 5.784: 94.6023% ( 17) 00:12:00.586 5.784 - 5.807: 94.7236% ( 14) 00:12:00.586 5.807 - 5.831: 94.9229% ( 23) 00:12:00.586 5.831 - 5.855: 95.0442% ( 14) 00:12:00.586 5.855 - 5.879: 95.3214% ( 32) 00:12:00.586 5.879 - 5.902: 95.4601% ( 16) 00:12:00.586 5.902 - 5.926: 95.5467% ( 10) 00:12:00.586 5.926 - 5.950: 95.6767% ( 15) 00:12:00.586 5.950 - 5.973: 95.7286% ( 6) 00:12:00.586 5.973 - 5.997: 95.8239% ( 11) 00:12:00.586 5.997 - 6.021: 95.8499% ( 3) 00:12:00.586 6.021 - 6.044: 95.9539% ( 12) 00:12:00.586 6.044 - 6.068: 95.9799% ( 3) 00:12:00.586 6.068 - 6.116: 96.0665% ( 10) 00:12:00.586 6.116 - 6.163: 96.1965% ( 15) 00:12:00.586 6.163 - 6.210: 96.2398% ( 5) 00:12:00.586 6.210 - 6.258: 96.3265% ( 10) 00:12:00.586 6.258 - 6.305: 96.4218% ( 11) 00:12:00.586 6.305 - 6.353: 96.6210% ( 23) 00:12:00.586 6.353 - 6.400: 96.6990% ( 9) 00:12:00.586 6.400 - 6.447: 96.7250% ( 3) 00:12:00.586 
6.447 - 6.495: 96.8896% ( 19) 00:12:00.586 6.495 - 6.542: 97.0889% ( 23) 00:12:00.586 6.542 - 6.590: 97.1322% ( 5) 00:12:00.586 6.590 - 6.637: 97.2362% ( 12) 00:12:00.586 6.637 - 6.684: 97.3315% ( 11) 00:12:00.586 6.684 - 6.732: 97.4181% ( 10) 00:12:00.586 6.732 - 6.779: 97.4701% ( 6) 00:12:00.586 6.779 - 6.827: 97.4874% ( 2) 00:12:00.587 6.827 - 6.874: 97.6607% ( 20) 00:12:00.587 6.874 - 6.921: 98.4058% ( 86) 00:12:00.587 6.921 - 6.969: 98.7957% ( 45) 00:12:00.587 6.969 - 7.016: 98.9950% ( 23) 00:12:00.587 7.016 - 7.064: 99.1163% ( 14) 00:12:00.587 7.064 - 7.111: 99.1769% ( 7) 00:12:00.587 7.111 - 7.159: 99.1942% ( 2) 00:12:00.587 7.159 - 7.206: 99.2116% ( 2) 00:12:00.587 7.253 - 7.301: 99.2202% ( 1) 00:12:00.587 7.301 - 7.348: 99.2289% ( 1) 00:12:00.587 8.059 - 8.107: 99.2376% ( 1) 00:12:00.587 8.296 - 8.344: 99.2549% ( 2) 00:12:00.587 8.391 - 8.439: 99.2636% ( 1) 00:12:00.587 8.533 - 8.581: 99.2722% ( 1) 00:12:00.587 8.581 - 8.628: 99.2896% ( 2) 00:12:00.587 8.628 - 8.676: 99.2982% ( 1) 00:12:00.587 8.676 - 8.723: 99.3069% ( 1) 00:12:00.587 8.865 - 8.913: 99.3155% ( 1) 00:12:00.587 8.913 - 8.960: 99.3415% ( 3) 00:12:00.587 9.055 - 9.102: 99.3502% ( 1) 00:12:00.587 9.150 - 9.197: 99.3675% ( 2) 00:12:00.587 9.197 - 9.244: 99.3762% ( 1) 00:12:00.587 9.244 - 9.292: 99.3849% ( 1) 00:12:00.587 9.434 - 9.481: 99.3935% ( 1) 00:12:00.587 9.481 - 9.529: 99.4108% ( 2) 00:12:00.587 9.529 - 9.576: 99.4195% ( 1) 00:12:00.587 9.576 - 9.624: 99.4368% ( 2) 00:12:00.587 9.624 - 9.671: 99.4455% ( 1) 00:12:00.587 9.719 - 9.766: 99.4542% ( 1) 00:12:00.587 9.813 - 9.861: 99.4628% ( 1) 00:12:00.587 9.861 - 9.908: 99.4715% ( 1) 00:12:00.587 9.908 - 9.956: 99.4802% ( 1) 00:12:00.587 10.098 - 10.145: 99.4975% ( 2) 00:12:00.587 10.240 - 10.287: 99.5062% ( 1) 00:12:00.587 10.287 - 10.335: 99.5408% ( 4) 00:12:00.587 10.335 - 10.382: 99.5495% ( 1) 00:12:00.587 10.382 - 10.430: 99.5668% ( 2) 00:12:00.587 10.430 - 10.477: 99.5755% ( 1) 00:12:00.587 10.714 - 10.761: 99.5841% ( 1) 00:12:00.587 10.761 - 10.809: 99.5928% ( 1) 00:12:00.587 10.809 - 10.856: 99.6015% ( 1) 00:12:00.587 10.856 - 10.904: 99.6101% ( 1) 00:12:00.587 10.904 - 10.951: 99.6188% ( 1) 00:12:00.587 10.951 - 10.999: 99.6274% ( 1) 00:12:00.587 11.046 - 11.093: 99.6361% ( 1) 00:12:00.587 11.283 - 11.330: 99.6448% ( 1) 00:12:00.587 11.473 - 11.520: 99.6534% ( 1) 00:12:00.587 11.567 - 11.615: 99.6621% ( 1) 00:12:00.587 11.899 - 11.947: 99.6708% ( 1) 00:12:00.587 12.136 - 12.231: 99.6794% ( 1) 00:12:00.587 12.800 - 12.895: 99.6881% ( 1) 00:12:00.587 12.990 - 13.084: 99.7054% ( 2) 00:12:00.587 13.274 - 13.369: 99.7141% ( 1) 00:12:00.587 13.464 - 13.559: 99.7228% ( 1) 00:12:00.587 13.559 - 13.653: 99.7487% ( 3) 00:12:00.587 13.653 - 13.748: 99.7747% ( 3) 00:12:00.587 13.748 - 13.843: 99.8267% ( 6) 00:12:00.587 13.843 - 13.938: 99.8527% ( 3) 00:12:00.587 13.938 - 14.033: 99.8874% ( 4) 00:12:00.587 14.222 - 14.317: 99.8960% ( 1) 00:12:00.587 14.507 - 14.601: 99.9047% ( 1) 00:12:00.587 14.601 - 14.696: 99.9134% ( 1) 00:12:00.587 16.403 - 16.498: 99.9220% ( 1) 00:12:00.587 3980.705 - 4004.978: 99.9480% ( 3) 00:12:00.587 4004.978 - 4029.250: 100.0000% ( 6) 00:12:00.587 00:12:00.587 Complete histogram 00:12:00.587 ================== 00:12:00.587 Range in us Cumulative Count 00:12:00.587 2.655 - 2.667: 2.7638% ( 319) 00:12:00.587 2.667 - 2.679: 41.3013% ( 4448) 00:12:00.587 2.679 - 2.690: 70.1871% ( 3334) 00:12:00.587 2.690 - 2.702: 74.4672% ( 494) 00:12:00.587 2.702 - 2.714: 80.4887% ( 695) 00:12:00.587 2.714 - 2.726: 88.2516% ( 896) 00:12:00.587 2.726 - 2.738: 
92.6962% ( 513) 00:12:00.587 2.738 - 2.750: 95.6333% ( 339) 00:12:00.587 2.750 - 2.761: 96.4651% ( 96) 00:12:00.587 2.761 - 2.773: 96.9243% ( 53) 00:12:00.587 2.773 - 2.785: 97.4181% ( 57) 00:12:00.587 2.785 - 2.797: 97.7647% ( 40) 00:12:00.587 2.797 - 2.809: 97.9986% ( 27) 00:12:00.587 2.809 - 2.821: 98.1719% ( 20) 00:12:00.587 2.821 - 2.833: 98.3712% ( 23) 00:12:00.587 2.833 - 2.844: 98.5185% ( 17) [2024-05-15 00:48:47.268544] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:00.587 2.844 - 2.856: 98.6138% ( 11) 00:12:00.587 2.856 - 2.868: 98.6744% ( 7) 00:12:00.587 2.868 - 2.880: 98.7091% ( 4) 00:12:00.587 2.880 - 2.892: 98.7264% ( 2) 00:12:00.587 2.892 - 2.904: 98.7351% ( 1) 00:12:00.587 2.904 - 2.916: 98.7437% ( 1) 00:12:00.587 2.927 - 2.939: 98.7524% ( 1) 00:12:00.587 2.951 - 2.963: 98.7610% ( 1) 00:12:00.587 2.963 - 2.975: 98.7697% ( 1) 00:12:00.587 3.319 - 3.342: 98.7870% ( 2) 00:12:00.587 3.342 - 3.366: 98.8044% ( 2) 00:12:00.587 3.366 - 3.390: 98.8130% ( 1) 00:12:00.587 3.390 - 3.413: 98.8477% ( 4) 00:12:00.587 3.413 - 3.437: 98.8737% ( 3) 00:12:00.587 3.437 - 3.461: 98.8823% ( 1) 00:12:00.587 3.461 - 3.484: 98.8997% ( 2) 00:12:00.587 3.508 - 3.532: 98.9170% ( 2) 00:12:00.587 3.532 - 3.556: 98.9517% ( 4) 00:12:00.587 3.556 - 3.579: 98.9776% ( 3) 00:12:00.587 3.579 - 3.603: 99.0036% ( 3) 00:12:00.587 3.627 - 3.650: 99.0210% ( 2) 00:12:00.587 3.650 - 3.674: 99.0296% ( 1) 00:12:00.587 3.674 - 3.698: 99.0383% ( 1) 00:12:00.587 3.816 - 3.840: 99.0470% ( 1) 00:12:00.587 3.840 - 3.864: 99.0556% ( 1) 00:12:00.587 4.101 - 4.124: 99.0643% ( 1) 00:12:00.587 4.148 - 4.172: 99.0730% ( 1) 00:12:00.587 4.290 - 4.314: 99.0816% ( 1) 00:12:00.587 4.504 - 4.527: 99.0903% ( 1) 00:12:00.587 5.381 - 5.404: 99.0989% ( 1) 00:12:00.587 6.116 - 6.163: 99.1076% ( 1) 00:12:00.587 6.210 - 6.258: 99.1163% ( 1) 00:12:00.587 6.921 - 6.969: 99.1249% ( 1) 00:12:00.587 7.206 - 7.253: 99.1336% ( 1) 00:12:00.587 7.253 - 7.301: 99.1423% ( 1) 00:12:00.587 7.680 - 7.727: 99.1509% ( 1) 00:12:00.587 7.727 - 7.775: 99.1596% ( 1) 00:12:00.587 7.822 - 7.870: 99.1683% ( 1) 00:12:00.587 8.486 - 8.533: 99.1769% ( 1) 00:12:00.587 8.676 - 8.723: 99.1856% ( 1) 00:12:00.587 8.865 - 8.913: 99.1942% ( 1) 00:12:00.587 8.913 - 8.960: 99.2029% ( 1) 00:12:00.587 8.960 - 9.007: 99.2116% ( 1) 00:12:00.587 9.481 - 9.529: 99.2289% ( 2) 00:12:00.587 9.861 - 9.908: 99.2376% ( 1) 00:12:00.587 10.145 - 10.193: 99.2462% ( 1) 00:12:00.587 10.287 - 10.335: 99.2549% ( 1) 00:12:00.587 10.619 - 10.667: 99.2636% ( 1) 00:12:00.587 10.667 - 10.714: 99.2722% ( 1) 00:12:00.587 1195.425 - 1201.493: 99.2809% ( 1) 00:12:00.587 3665.161 - 3689.434: 99.2896% ( 1) 00:12:00.587 3980.705 - 4004.978: 99.6881% ( 46) 00:12:00.587 4004.978 - 4029.250: 99.9827% ( 34) 00:12:00.587 6990.507 - 7039.052: 100.0000% ( 2) 00:12:00.587 00:12:00.587 00:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:00.587 00:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:00.587 00:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:00.587 00:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:00.587 00:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_get_subsystems 00:12:00.587 [ 00:12:00.587 { 00:12:00.587 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:00.587 "subtype": "Discovery", 00:12:00.587 "listen_addresses": [], 00:12:00.587 "allow_any_host": true, 00:12:00.587 "hosts": [] 00:12:00.587 }, 00:12:00.587 { 00:12:00.587 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:00.587 "subtype": "NVMe", 00:12:00.587 "listen_addresses": [ 00:12:00.587 { 00:12:00.587 "trtype": "VFIOUSER", 00:12:00.587 "adrfam": "IPv4", 00:12:00.587 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:00.587 "trsvcid": "0" 00:12:00.587 } 00:12:00.587 ], 00:12:00.587 "allow_any_host": true, 00:12:00.587 "hosts": [], 00:12:00.587 "serial_number": "SPDK1", 00:12:00.587 "model_number": "SPDK bdev Controller", 00:12:00.587 "max_namespaces": 32, 00:12:00.587 "min_cntlid": 1, 00:12:00.587 "max_cntlid": 65519, 00:12:00.587 "namespaces": [ 00:12:00.587 { 00:12:00.587 "nsid": 1, 00:12:00.587 "bdev_name": "Malloc1", 00:12:00.587 "name": "Malloc1", 00:12:00.587 "nguid": "0C792F67019A4709B1A2A6E4102CC468", 00:12:00.588 "uuid": "0c792f67-019a-4709-b1a2-a6e4102cc468" 00:12:00.588 } 00:12:00.588 ] 00:12:00.588 }, 00:12:00.588 { 00:12:00.588 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:00.588 "subtype": "NVMe", 00:12:00.588 "listen_addresses": [ 00:12:00.588 { 00:12:00.588 "trtype": "VFIOUSER", 00:12:00.588 "adrfam": "IPv4", 00:12:00.588 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:00.588 "trsvcid": "0" 00:12:00.588 } 00:12:00.588 ], 00:12:00.588 "allow_any_host": true, 00:12:00.588 "hosts": [], 00:12:00.588 "serial_number": "SPDK2", 00:12:00.588 "model_number": "SPDK bdev Controller", 00:12:00.588 "max_namespaces": 32, 00:12:00.588 "min_cntlid": 1, 00:12:00.588 "max_cntlid": 65519, 00:12:00.588 "namespaces": [ 00:12:00.588 { 00:12:00.588 "nsid": 1, 00:12:00.588 "bdev_name": "Malloc2", 00:12:00.588 "name": "Malloc2", 00:12:00.588 "nguid": "77908B3610304EF99AB609B8DD11D9FD", 00:12:00.588 "uuid": "77908b36-1030-4ef9-9ab6-09b8dd11d9fd" 00:12:00.588 } 00:12:00.588 ] 00:12:00.588 } 00:12:00.588 ] 00:12:00.588 00:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:00.588 00:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3983004 00:12:00.588 00:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:00.588 00:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:00.588 00:48:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:12:00.588 00:48:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:00.588 00:48:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:00.588 00:48:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:12:00.588 00:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:00.588 00:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:00.845 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.845 [2024-05-15 00:48:47.784528] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:01.103 Malloc3 00:12:01.103 00:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:01.360 [2024-05-15 00:48:48.238055] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:01.360 00:48:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:01.360 Asynchronous Event Request test 00:12:01.360 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:01.360 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:01.360 Registering asynchronous event callbacks... 00:12:01.360 Starting namespace attribute notice tests for all controllers... 00:12:01.360 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:01.360 aer_cb - Changed Namespace 00:12:01.360 Cleaning up... 00:12:01.618 [ 00:12:01.618 { 00:12:01.618 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:01.618 "subtype": "Discovery", 00:12:01.618 "listen_addresses": [], 00:12:01.618 "allow_any_host": true, 00:12:01.618 "hosts": [] 00:12:01.618 }, 00:12:01.618 { 00:12:01.618 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:01.618 "subtype": "NVMe", 00:12:01.618 "listen_addresses": [ 00:12:01.618 { 00:12:01.618 "trtype": "VFIOUSER", 00:12:01.618 "adrfam": "IPv4", 00:12:01.618 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:01.618 "trsvcid": "0" 00:12:01.618 } 00:12:01.618 ], 00:12:01.618 "allow_any_host": true, 00:12:01.618 "hosts": [], 00:12:01.618 "serial_number": "SPDK1", 00:12:01.618 "model_number": "SPDK bdev Controller", 00:12:01.618 "max_namespaces": 32, 00:12:01.618 "min_cntlid": 1, 00:12:01.618 "max_cntlid": 65519, 00:12:01.618 "namespaces": [ 00:12:01.618 { 00:12:01.618 "nsid": 1, 00:12:01.618 "bdev_name": "Malloc1", 00:12:01.618 "name": "Malloc1", 00:12:01.618 "nguid": "0C792F67019A4709B1A2A6E4102CC468", 00:12:01.618 "uuid": "0c792f67-019a-4709-b1a2-a6e4102cc468" 00:12:01.618 }, 00:12:01.618 { 00:12:01.618 "nsid": 2, 00:12:01.618 "bdev_name": "Malloc3", 00:12:01.618 "name": "Malloc3", 00:12:01.618 "nguid": "8452109985D14FE68656FADC5275FAD5", 00:12:01.618 "uuid": "84521099-85d1-4fe6-8656-fadc5275fad5" 00:12:01.618 } 00:12:01.618 ] 00:12:01.618 }, 00:12:01.618 { 00:12:01.618 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:01.618 "subtype": "NVMe", 00:12:01.618 "listen_addresses": [ 00:12:01.618 { 00:12:01.618 "trtype": "VFIOUSER", 00:12:01.618 "adrfam": "IPv4", 00:12:01.618 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:01.618 "trsvcid": "0" 00:12:01.618 } 00:12:01.618 ], 00:12:01.618 "allow_any_host": true, 00:12:01.618 "hosts": [], 00:12:01.618 "serial_number": "SPDK2", 00:12:01.618 "model_number": "SPDK bdev Controller", 00:12:01.618 
"max_namespaces": 32, 00:12:01.618 "min_cntlid": 1, 00:12:01.618 "max_cntlid": 65519, 00:12:01.618 "namespaces": [ 00:12:01.618 { 00:12:01.618 "nsid": 1, 00:12:01.618 "bdev_name": "Malloc2", 00:12:01.618 "name": "Malloc2", 00:12:01.618 "nguid": "77908B3610304EF99AB609B8DD11D9FD", 00:12:01.618 "uuid": "77908b36-1030-4ef9-9ab6-09b8dd11d9fd" 00:12:01.618 } 00:12:01.618 ] 00:12:01.618 } 00:12:01.618 ] 00:12:01.618 00:48:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3983004 00:12:01.618 00:48:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:01.618 00:48:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:01.618 00:48:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:01.619 00:48:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:01.619 [2024-05-15 00:48:48.523676] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:12:01.619 [2024-05-15 00:48:48.523725] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3983105 ] 00:12:01.619 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.619 [2024-05-15 00:48:48.565834] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:01.619 [2024-05-15 00:48:48.568223] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:01.619 [2024-05-15 00:48:48.568255] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9744b15000 00:12:01.619 [2024-05-15 00:48:48.569217] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.619 [2024-05-15 00:48:48.570218] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.619 [2024-05-15 00:48:48.571227] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.619 [2024-05-15 00:48:48.572234] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:01.619 [2024-05-15 00:48:48.573246] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:01.619 [2024-05-15 00:48:48.574257] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.619 [2024-05-15 00:48:48.575267] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:01.619 [2024-05-15 00:48:48.576268] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.619 [2024-05-15 00:48:48.577296] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:01.619 [2024-05-15 00:48:48.577324] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9744b0a000 00:12:01.619 [2024-05-15 00:48:48.578779] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:01.619 [2024-05-15 00:48:48.598787] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:01.619 [2024-05-15 00:48:48.598828] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:01.619 [2024-05-15 00:48:48.600942] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:01.619 [2024-05-15 00:48:48.601004] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:01.619 [2024-05-15 00:48:48.601116] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:01.619 [2024-05-15 00:48:48.601145] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:01.619 [2024-05-15 00:48:48.601159] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:01.619 [2024-05-15 00:48:48.602946] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:01.619 [2024-05-15 00:48:48.602969] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:01.619 [2024-05-15 00:48:48.602991] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:01.619 [2024-05-15 00:48:48.603970] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:01.619 [2024-05-15 00:48:48.603997] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:01.619 [2024-05-15 00:48:48.604013] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:01.619 [2024-05-15 00:48:48.604964] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:01.619 [2024-05-15 00:48:48.604987] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:01.619 [2024-05-15 00:48:48.605974] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:01.619 [2024-05-15 00:48:48.605996] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:01.619 [2024-05-15 00:48:48.606006] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:01.619 [2024-05-15 00:48:48.606020] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:01.619 [2024-05-15 00:48:48.606131] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:01.619 [2024-05-15 00:48:48.606140] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:01.619 [2024-05-15 00:48:48.606150] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:01.619 [2024-05-15 00:48:48.606983] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:01.619 [2024-05-15 00:48:48.607989] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:01.619 [2024-05-15 00:48:48.609000] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:01.619 [2024-05-15 00:48:48.610001] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:01.619 [2024-05-15 00:48:48.610084] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:01.619 [2024-05-15 00:48:48.611020] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:01.619 [2024-05-15 00:48:48.611048] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:01.619 [2024-05-15 00:48:48.611064] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:01.619 [2024-05-15 00:48:48.611093] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:01.619 [2024-05-15 00:48:48.611113] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:01.619 [2024-05-15 00:48:48.611142] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:01.619 [2024-05-15 00:48:48.611153] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:01.619 [2024-05-15 00:48:48.611176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:01.619 [2024-05-15 00:48:48.621956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:01.619 [2024-05-15 00:48:48.621989] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:01.619 [2024-05-15 00:48:48.622000] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:01.619 [2024-05-15 00:48:48.622009] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:01.619 [2024-05-15 00:48:48.622018] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:01.619 [2024-05-15 00:48:48.622027] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:01.619 [2024-05-15 00:48:48.622041] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:01.619 [2024-05-15 00:48:48.622050] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:01.619 [2024-05-15 00:48:48.622070] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:01.619 [2024-05-15 00:48:48.622092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:01.619 [2024-05-15 00:48:48.629953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:01.619 [2024-05-15 00:48:48.629980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.619 [2024-05-15 00:48:48.629996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.619 [2024-05-15 00:48:48.630010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.619 [2024-05-15 00:48:48.630025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.619 [2024-05-15 00:48:48.630035] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:01.619 [2024-05-15 00:48:48.630053] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:01.619 [2024-05-15 00:48:48.630070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:01.619 [2024-05-15 00:48:48.637953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:01.619 [2024-05-15 00:48:48.637976] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:01.619 [2024-05-15 00:48:48.637988] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:01.619 [2024-05-15 00:48:48.638001] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:01.619 [2024-05-15 00:48:48.638017] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:01.619 [2024-05-15 00:48:48.638034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:01.619 [2024-05-15 00:48:48.645950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:01.619 [2024-05-15 00:48:48.646030] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:01.619 [2024-05-15 00:48:48.646049] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:01.619 [2024-05-15 00:48:48.646065] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:01.619 [2024-05-15 00:48:48.646075] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:01.620 [2024-05-15 00:48:48.646086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:01.620 [2024-05-15 00:48:48.653952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:01.620 [2024-05-15 00:48:48.653984] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:01.620 [2024-05-15 00:48:48.654002] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:01.620 [2024-05-15 00:48:48.654019] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:01.620 [2024-05-15 00:48:48.654033] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:01.620 [2024-05-15 00:48:48.654043] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:01.620 [2024-05-15 00:48:48.654055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:01.620 [2024-05-15 00:48:48.661947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:01.620 [2024-05-15 00:48:48.661982] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:01.620 [2024-05-15 00:48:48.661999] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:01.620 [2024-05-15 00:48:48.662014] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:01.620 [2024-05-15 00:48:48.662023] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:01.620 [2024-05-15 00:48:48.662035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:01.620 [2024-05-15 00:48:48.669951] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:01.620 [2024-05-15 00:48:48.669988] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:01.620 [2024-05-15 00:48:48.670010] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:01.620 [2024-05-15 00:48:48.670026] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:01.620 [2024-05-15 00:48:48.670038] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:01.620 [2024-05-15 00:48:48.670048] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:01.620 [2024-05-15 00:48:48.670057] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:01.620 [2024-05-15 00:48:48.670066] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:01.620 [2024-05-15 00:48:48.670076] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:01.620 [2024-05-15 00:48:48.670111] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:01.879 [2024-05-15 00:48:48.677962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:01.879 [2024-05-15 00:48:48.677990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:01.879 [2024-05-15 00:48:48.685944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:01.879 [2024-05-15 00:48:48.685972] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:01.879 [2024-05-15 00:48:48.693944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:01.879 [2024-05-15 00:48:48.693971] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:01.879 [2024-05-15 00:48:48.701944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:01.879 [2024-05-15 00:48:48.701982] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:01.879 [2024-05-15 00:48:48.701993] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:01.879 [2024-05-15 00:48:48.702001] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:01.879 [2024-05-15 00:48:48.702008] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:01.879 [2024-05-15 00:48:48.702019] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:01.879 [2024-05-15 00:48:48.702033] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:01.879 [2024-05-15 00:48:48.702042] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:01.879 [2024-05-15 00:48:48.702053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:01.879 [2024-05-15 00:48:48.702065] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:01.879 [2024-05-15 00:48:48.702075] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:01.879 [2024-05-15 00:48:48.702085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:01.879 [2024-05-15 00:48:48.702108] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:01.879 [2024-05-15 00:48:48.702119] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:01.879 [2024-05-15 00:48:48.702130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:01.879 [2024-05-15 00:48:48.709950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:01.879 [2024-05-15 00:48:48.709987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:01.879 [2024-05-15 00:48:48.710005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:01.879 [2024-05-15 00:48:48.710021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:01.879 ===================================================== 00:12:01.879 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:01.879 ===================================================== 00:12:01.879 Controller Capabilities/Features 00:12:01.879 ================================ 00:12:01.879 Vendor ID: 4e58 00:12:01.879 Subsystem Vendor ID: 4e58 00:12:01.879 Serial Number: SPDK2 00:12:01.879 Model Number: SPDK bdev Controller 00:12:01.879 Firmware Version: 24.05 00:12:01.879 Recommended Arb Burst: 6 00:12:01.879 IEEE OUI Identifier: 8d 6b 50 00:12:01.879 Multi-path I/O 00:12:01.879 May have multiple subsystem ports: Yes 00:12:01.879 May have multiple controllers: Yes 00:12:01.879 Associated with SR-IOV VF: No 00:12:01.879 Max Data Transfer Size: 131072 00:12:01.879 Max Number of Namespaces: 32 00:12:01.879 Max Number of I/O Queues: 127 00:12:01.879 NVMe Specification Version (VS): 1.3 00:12:01.879 NVMe Specification Version (Identify): 1.3 00:12:01.879 Maximum Queue Entries: 256 00:12:01.879 Contiguous Queues Required: Yes 00:12:01.879 Arbitration Mechanisms Supported 00:12:01.879 Weighted Round Robin: Not Supported 00:12:01.879 Vendor Specific: Not Supported 00:12:01.879 Reset Timeout: 15000 ms 00:12:01.879 Doorbell Stride: 4 bytes 
00:12:01.879 NVM Subsystem Reset: Not Supported 00:12:01.879 Command Sets Supported 00:12:01.879 NVM Command Set: Supported 00:12:01.879 Boot Partition: Not Supported 00:12:01.879 Memory Page Size Minimum: 4096 bytes 00:12:01.879 Memory Page Size Maximum: 4096 bytes 00:12:01.879 Persistent Memory Region: Not Supported 00:12:01.879 Optional Asynchronous Events Supported 00:12:01.879 Namespace Attribute Notices: Supported 00:12:01.879 Firmware Activation Notices: Not Supported 00:12:01.879 ANA Change Notices: Not Supported 00:12:01.879 PLE Aggregate Log Change Notices: Not Supported 00:12:01.879 LBA Status Info Alert Notices: Not Supported 00:12:01.879 EGE Aggregate Log Change Notices: Not Supported 00:12:01.879 Normal NVM Subsystem Shutdown event: Not Supported 00:12:01.879 Zone Descriptor Change Notices: Not Supported 00:12:01.879 Discovery Log Change Notices: Not Supported 00:12:01.879 Controller Attributes 00:12:01.879 128-bit Host Identifier: Supported 00:12:01.879 Non-Operational Permissive Mode: Not Supported 00:12:01.879 NVM Sets: Not Supported 00:12:01.879 Read Recovery Levels: Not Supported 00:12:01.879 Endurance Groups: Not Supported 00:12:01.879 Predictable Latency Mode: Not Supported 00:12:01.879 Traffic Based Keep ALive: Not Supported 00:12:01.879 Namespace Granularity: Not Supported 00:12:01.879 SQ Associations: Not Supported 00:12:01.879 UUID List: Not Supported 00:12:01.879 Multi-Domain Subsystem: Not Supported 00:12:01.879 Fixed Capacity Management: Not Supported 00:12:01.879 Variable Capacity Management: Not Supported 00:12:01.879 Delete Endurance Group: Not Supported 00:12:01.879 Delete NVM Set: Not Supported 00:12:01.879 Extended LBA Formats Supported: Not Supported 00:12:01.879 Flexible Data Placement Supported: Not Supported 00:12:01.879 00:12:01.879 Controller Memory Buffer Support 00:12:01.879 ================================ 00:12:01.879 Supported: No 00:12:01.879 00:12:01.879 Persistent Memory Region Support 00:12:01.879 ================================ 00:12:01.879 Supported: No 00:12:01.879 00:12:01.879 Admin Command Set Attributes 00:12:01.879 ============================ 00:12:01.879 Security Send/Receive: Not Supported 00:12:01.879 Format NVM: Not Supported 00:12:01.879 Firmware Activate/Download: Not Supported 00:12:01.879 Namespace Management: Not Supported 00:12:01.879 Device Self-Test: Not Supported 00:12:01.879 Directives: Not Supported 00:12:01.879 NVMe-MI: Not Supported 00:12:01.879 Virtualization Management: Not Supported 00:12:01.879 Doorbell Buffer Config: Not Supported 00:12:01.880 Get LBA Status Capability: Not Supported 00:12:01.880 Command & Feature Lockdown Capability: Not Supported 00:12:01.880 Abort Command Limit: 4 00:12:01.880 Async Event Request Limit: 4 00:12:01.880 Number of Firmware Slots: N/A 00:12:01.880 Firmware Slot 1 Read-Only: N/A 00:12:01.880 Firmware Activation Without Reset: N/A 00:12:01.880 Multiple Update Detection Support: N/A 00:12:01.880 Firmware Update Granularity: No Information Provided 00:12:01.880 Per-Namespace SMART Log: No 00:12:01.880 Asymmetric Namespace Access Log Page: Not Supported 00:12:01.880 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:01.880 Command Effects Log Page: Supported 00:12:01.880 Get Log Page Extended Data: Supported 00:12:01.880 Telemetry Log Pages: Not Supported 00:12:01.880 Persistent Event Log Pages: Not Supported 00:12:01.880 Supported Log Pages Log Page: May Support 00:12:01.880 Commands Supported & Effects Log Page: Not Supported 00:12:01.880 Feature Identifiers & Effects Log Page:May 
Support 00:12:01.880 NVMe-MI Commands & Effects Log Page: May Support 00:12:01.880 Data Area 4 for Telemetry Log: Not Supported 00:12:01.880 Error Log Page Entries Supported: 128 00:12:01.880 Keep Alive: Supported 00:12:01.880 Keep Alive Granularity: 10000 ms 00:12:01.880 00:12:01.880 NVM Command Set Attributes 00:12:01.880 ========================== 00:12:01.880 Submission Queue Entry Size 00:12:01.880 Max: 64 00:12:01.880 Min: 64 00:12:01.880 Completion Queue Entry Size 00:12:01.880 Max: 16 00:12:01.880 Min: 16 00:12:01.880 Number of Namespaces: 32 00:12:01.880 Compare Command: Supported 00:12:01.880 Write Uncorrectable Command: Not Supported 00:12:01.880 Dataset Management Command: Supported 00:12:01.880 Write Zeroes Command: Supported 00:12:01.880 Set Features Save Field: Not Supported 00:12:01.880 Reservations: Not Supported 00:12:01.880 Timestamp: Not Supported 00:12:01.880 Copy: Supported 00:12:01.880 Volatile Write Cache: Present 00:12:01.880 Atomic Write Unit (Normal): 1 00:12:01.880 Atomic Write Unit (PFail): 1 00:12:01.880 Atomic Compare & Write Unit: 1 00:12:01.880 Fused Compare & Write: Supported 00:12:01.880 Scatter-Gather List 00:12:01.880 SGL Command Set: Supported (Dword aligned) 00:12:01.880 SGL Keyed: Not Supported 00:12:01.880 SGL Bit Bucket Descriptor: Not Supported 00:12:01.880 SGL Metadata Pointer: Not Supported 00:12:01.880 Oversized SGL: Not Supported 00:12:01.880 SGL Metadata Address: Not Supported 00:12:01.880 SGL Offset: Not Supported 00:12:01.880 Transport SGL Data Block: Not Supported 00:12:01.880 Replay Protected Memory Block: Not Supported 00:12:01.880 00:12:01.880 Firmware Slot Information 00:12:01.880 ========================= 00:12:01.880 Active slot: 1 00:12:01.880 Slot 1 Firmware Revision: 24.05 00:12:01.880 00:12:01.880 00:12:01.880 Commands Supported and Effects 00:12:01.880 ============================== 00:12:01.880 Admin Commands 00:12:01.880 -------------- 00:12:01.880 Get Log Page (02h): Supported 00:12:01.880 Identify (06h): Supported 00:12:01.880 Abort (08h): Supported 00:12:01.880 Set Features (09h): Supported 00:12:01.880 Get Features (0Ah): Supported 00:12:01.880 Asynchronous Event Request (0Ch): Supported 00:12:01.880 Keep Alive (18h): Supported 00:12:01.880 I/O Commands 00:12:01.880 ------------ 00:12:01.880 Flush (00h): Supported LBA-Change 00:12:01.880 Write (01h): Supported LBA-Change 00:12:01.880 Read (02h): Supported 00:12:01.880 Compare (05h): Supported 00:12:01.880 Write Zeroes (08h): Supported LBA-Change 00:12:01.880 Dataset Management (09h): Supported LBA-Change 00:12:01.880 Copy (19h): Supported LBA-Change 00:12:01.880 Unknown (79h): Supported LBA-Change 00:12:01.880 Unknown (7Ah): Supported 00:12:01.880 00:12:01.880 Error Log 00:12:01.880 ========= 00:12:01.880 00:12:01.880 Arbitration 00:12:01.880 =========== 00:12:01.880 Arbitration Burst: 1 00:12:01.880 00:12:01.880 Power Management 00:12:01.880 ================ 00:12:01.880 Number of Power States: 1 00:12:01.880 Current Power State: Power State #0 00:12:01.880 Power State #0: 00:12:01.880 Max Power: 0.00 W 00:12:01.880 Non-Operational State: Operational 00:12:01.880 Entry Latency: Not Reported 00:12:01.880 Exit Latency: Not Reported 00:12:01.880 Relative Read Throughput: 0 00:12:01.880 Relative Read Latency: 0 00:12:01.880 Relative Write Throughput: 0 00:12:01.880 Relative Write Latency: 0 00:12:01.880 Idle Power: Not Reported 00:12:01.880 Active Power: Not Reported 00:12:01.880 Non-Operational Permissive Mode: Not Supported 00:12:01.880 00:12:01.880 Health Information 
00:12:01.880 ================== 00:12:01.880 Critical Warnings: 00:12:01.880 Available Spare Space: OK 00:12:01.880 Temperature: OK 00:12:01.880 Device Reliability: OK 00:12:01.880 Read Only: No 00:12:01.880 Volatile Memory Backup: OK 00:12:01.880 Current Temperature: 0 Kelvin (-273 Celsius) [2024-05-15 00:48:48.710166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:01.880 [2024-05-15 00:48:48.717951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:01.880 [2024-05-15 00:48:48.718009] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:01.880 [2024-05-15 00:48:48.718029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.880 [2024-05-15 00:48:48.718041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.880 [2024-05-15 00:48:48.718053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.880 [2024-05-15 00:48:48.718064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.880 [2024-05-15 00:48:48.718150] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:01.880 [2024-05-15 00:48:48.718174] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:01.880 [2024-05-15 00:48:48.719152] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:01.880 [2024-05-15 00:48:48.719232] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:01.880 [2024-05-15 00:48:48.719248] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:01.880 [2024-05-15 00:48:48.720160] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:01.880 [2024-05-15 00:48:48.720186] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:01.880 [2024-05-15 00:48:48.720262] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:01.880 [2024-05-15 00:48:48.721786] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:01.880 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:01.880 Available Spare: 0% 00:12:01.880 Available Spare Threshold: 0% 00:12:01.880 Life Percentage Used: 0% 00:12:01.880 Data Units Read: 0 00:12:01.880 Data Units Written: 0 00:12:01.880 Host Read Commands: 0 00:12:01.880 Host Write Commands: 0 00:12:01.880 Controller Busy Time: 0 minutes 00:12:01.880 Power Cycles: 0 00:12:01.880 Power On Hours: 0 hours 00:12:01.880 Unsafe Shutdowns: 0 00:12:01.880 Unrecoverable Media Errors: 0 00:12:01.880 Lifetime Error Log Entries: 0 00:12:01.880 Warning Temperature Time: 0
minutes 00:12:01.880 Critical Temperature Time: 0 minutes 00:12:01.880 00:12:01.880 Number of Queues 00:12:01.880 ================ 00:12:01.880 Number of I/O Submission Queues: 127 00:12:01.880 Number of I/O Completion Queues: 127 00:12:01.880 00:12:01.880 Active Namespaces 00:12:01.880 ================= 00:12:01.880 Namespace ID:1 00:12:01.880 Error Recovery Timeout: Unlimited 00:12:01.880 Command Set Identifier: NVM (00h) 00:12:01.880 Deallocate: Supported 00:12:01.880 Deallocated/Unwritten Error: Not Supported 00:12:01.880 Deallocated Read Value: Unknown 00:12:01.880 Deallocate in Write Zeroes: Not Supported 00:12:01.880 Deallocated Guard Field: 0xFFFF 00:12:01.880 Flush: Supported 00:12:01.880 Reservation: Supported 00:12:01.880 Namespace Sharing Capabilities: Multiple Controllers 00:12:01.880 Size (in LBAs): 131072 (0GiB) 00:12:01.880 Capacity (in LBAs): 131072 (0GiB) 00:12:01.880 Utilization (in LBAs): 131072 (0GiB) 00:12:01.880 NGUID: 77908B3610304EF99AB609B8DD11D9FD 00:12:01.880 UUID: 77908b36-1030-4ef9-9ab6-09b8dd11d9fd 00:12:01.880 Thin Provisioning: Not Supported 00:12:01.880 Per-NS Atomic Units: Yes 00:12:01.880 Atomic Boundary Size (Normal): 0 00:12:01.880 Atomic Boundary Size (PFail): 0 00:12:01.880 Atomic Boundary Offset: 0 00:12:01.880 Maximum Single Source Range Length: 65535 00:12:01.880 Maximum Copy Length: 65535 00:12:01.880 Maximum Source Range Count: 1 00:12:01.880 NGUID/EUI64 Never Reused: No 00:12:01.880 Namespace Write Protected: No 00:12:01.880 Number of LBA Formats: 1 00:12:01.880 Current LBA Format: LBA Format #00 00:12:01.880 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:01.880 00:12:01.881 00:48:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:01.881 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.138 [2024-05-15 00:48:48.945360] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:07.401 Initializing NVMe Controllers 00:12:07.401 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:07.401 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:07.402 Initialization complete. Launching workers. 
00:12:07.402 ======================================================== 00:12:07.402 Latency(us) 00:12:07.402 Device Information : IOPS MiB/s Average min max 00:12:07.402 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24104.80 94.16 5310.97 1467.56 10562.85 00:12:07.402 ======================================================== 00:12:07.402 Total : 24104.80 94.16 5310.97 1467.56 10562.85 00:12:07.402 00:12:07.402 [2024-05-15 00:48:54.050268] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:07.402 00:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:07.402 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.402 [2024-05-15 00:48:54.278971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:12.714 Initializing NVMe Controllers 00:12:12.714 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:12.714 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:12.714 Initialization complete. Launching workers. 00:12:12.714 ======================================================== 00:12:12.714 Latency(us) 00:12:12.714 Device Information : IOPS MiB/s Average min max 00:12:12.714 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24111.81 94.19 5308.55 1474.94 10556.28 00:12:12.714 ======================================================== 00:12:12.714 Total : 24111.81 94.19 5308.55 1474.94 10556.28 00:12:12.714 00:12:12.714 [2024-05-15 00:48:59.301893] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:12.714 00:48:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:12.714 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.714 [2024-05-15 00:48:59.534485] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:17.981 [2024-05-15 00:49:04.678059] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:17.981 Initializing NVMe Controllers 00:12:17.981 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:17.981 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:17.981 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:17.981 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:17.981 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:17.981 Initialization complete. Launching workers. 
00:12:17.981 Starting thread on core 2 00:12:17.981 Starting thread on core 3 00:12:17.981 Starting thread on core 1 00:12:17.981 00:49:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:17.981 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.981 [2024-05-15 00:49:04.969462] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:21.346 [2024-05-15 00:49:08.040727] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:21.346 Initializing NVMe Controllers 00:12:21.346 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:21.346 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:21.346 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:21.346 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:21.346 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:21.346 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:21.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:21.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:21.346 Initialization complete. Launching workers. 00:12:21.346 Starting thread on core 1 with urgent priority queue 00:12:21.346 Starting thread on core 2 with urgent priority queue 00:12:21.346 Starting thread on core 3 with urgent priority queue 00:12:21.346 Starting thread on core 0 with urgent priority queue 00:12:21.346 SPDK bdev Controller (SPDK2 ) core 0: 7135.00 IO/s 14.02 secs/100000 ios 00:12:21.346 SPDK bdev Controller (SPDK2 ) core 1: 7262.67 IO/s 13.77 secs/100000 ios 00:12:21.346 SPDK bdev Controller (SPDK2 ) core 2: 6801.67 IO/s 14.70 secs/100000 ios 00:12:21.346 SPDK bdev Controller (SPDK2 ) core 3: 8385.00 IO/s 11.93 secs/100000 ios 00:12:21.346 ======================================================== 00:12:21.346 00:12:21.346 00:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:21.346 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.346 [2024-05-15 00:49:08.324594] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:21.346 Initializing NVMe Controllers 00:12:21.346 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:21.346 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:21.346 Namespace ID: 1 size: 0GB 00:12:21.346 Initialization complete. 00:12:21.346 INFO: using host memory buffer for IO 00:12:21.346 Hello world! 
00:12:21.346 [2024-05-15 00:49:08.337743] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:21.346 00:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:21.602 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.602 [2024-05-15 00:49:08.610589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:22.974 Initializing NVMe Controllers 00:12:22.974 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:22.974 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:22.974 Initialization complete. Launching workers. 00:12:22.974 submit (in ns) avg, min, max = 8513.7, 4485.9, 4016250.4 00:12:22.974 complete (in ns) avg, min, max = 30920.3, 2650.4, 6993644.4 00:12:22.974 00:12:22.974 Submit histogram 00:12:22.974 ================ 00:12:22.974 Range in us Cumulative Count 00:12:22.974 4.480 - 4.504: 0.0767% ( 9) 00:12:22.974 4.504 - 4.527: 0.5026% ( 50) 00:12:22.974 4.527 - 4.551: 2.2150% ( 201) 00:12:22.974 4.551 - 4.575: 5.7420% ( 414) 00:12:22.974 4.575 - 4.599: 10.7599% ( 589) 00:12:22.974 4.599 - 4.622: 15.2922% ( 532) 00:12:22.974 4.622 - 4.646: 18.5466% ( 382) 00:12:22.974 4.646 - 4.670: 19.7052% ( 136) 00:12:22.974 4.670 - 4.693: 20.4549% ( 88) 00:12:22.974 4.693 - 4.717: 21.4176% ( 113) 00:12:22.974 4.717 - 4.741: 24.1779% ( 324) 00:12:22.974 4.741 - 4.764: 28.9743% ( 563) 00:12:22.974 4.764 - 4.788: 35.7812% ( 799) 00:12:22.974 4.788 - 4.812: 42.0429% ( 735) 00:12:22.974 4.812 - 4.836: 45.0332% ( 351) 00:12:22.974 4.836 - 4.859: 46.3282% ( 152) 00:12:22.974 4.859 - 4.883: 46.9330% ( 71) 00:12:22.974 4.883 - 4.907: 47.9042% ( 114) 00:12:22.974 4.907 - 4.930: 49.1566% ( 147) 00:12:22.974 4.930 - 4.954: 50.5793% ( 167) 00:12:22.974 4.954 - 4.978: 52.7773% ( 258) 00:12:22.974 4.978 - 5.001: 54.1574% ( 162) 00:12:22.974 5.001 - 5.025: 55.3161% ( 136) 00:12:22.974 5.025 - 5.049: 56.1510% ( 98) 00:12:22.974 5.049 - 5.073: 56.4832% ( 39) 00:12:22.974 5.073 - 5.096: 56.6025% ( 14) 00:12:22.974 5.096 - 5.120: 56.8070% ( 24) 00:12:22.974 5.120 - 5.144: 57.3352% ( 62) 00:12:22.974 5.144 - 5.167: 59.4224% ( 245) 00:12:22.974 5.167 - 5.191: 62.2508% ( 332) 00:12:22.974 5.191 - 5.215: 67.8139% ( 653) 00:12:22.974 5.215 - 5.239: 70.4038% ( 304) 00:12:22.974 5.239 - 5.262: 71.9117% ( 177) 00:12:22.974 5.262 - 5.286: 72.6870% ( 91) 00:12:22.974 5.286 - 5.310: 73.5645% ( 103) 00:12:22.974 5.310 - 5.333: 75.1917% ( 191) 00:12:22.974 5.333 - 5.357: 78.0371% ( 334) 00:12:22.974 5.357 - 5.381: 79.9114% ( 220) 00:12:22.974 5.381 - 5.404: 81.3001% ( 163) 00:12:22.974 5.404 - 5.428: 81.9220% ( 73) 00:12:22.974 5.428 - 5.452: 83.0039% ( 127) 00:12:22.974 5.452 - 5.476: 83.4895% ( 57) 00:12:22.974 5.476 - 5.499: 83.7195% ( 27) 00:12:22.974 5.499 - 5.523: 83.8218% ( 12) 00:12:22.974 5.523 - 5.547: 84.8185% ( 117) 00:12:22.974 5.547 - 5.570: 86.4117% ( 187) 00:12:22.974 5.570 - 5.594: 90.6884% ( 502) 00:12:22.974 5.594 - 5.618: 92.6734% ( 233) 00:12:22.974 5.618 - 5.641: 94.1046% ( 168) 00:12:22.974 5.641 - 5.665: 94.3943% ( 34) 00:12:22.974 5.665 - 5.689: 94.5902% ( 23) 00:12:22.974 5.689 - 5.713: 94.7350% ( 17) 00:12:22.974 5.713 - 5.736: 94.8117% ( 9) 00:12:22.974 5.736 - 5.760: 94.8799% ( 8) 00:12:22.975 5.760 - 5.784: 94.8969% ( 2) 00:12:22.975 5.784 - 5.807: 94.9395% 
( 5) 00:12:22.975 5.807 - 5.831: 94.9991% ( 7) 00:12:22.975 5.831 - 5.855: 95.0843% ( 10) 00:12:22.975 5.855 - 5.879: 95.1866% ( 12) 00:12:22.975 5.879 - 5.902: 95.3058% ( 14) 00:12:22.975 5.902 - 5.926: 95.3740% ( 8) 00:12:22.975 5.926 - 5.950: 95.4677% ( 11) 00:12:22.975 5.950 - 5.973: 95.5018% ( 4) 00:12:22.975 5.973 - 5.997: 95.5699% ( 8) 00:12:22.975 5.997 - 6.021: 95.6381% ( 8) 00:12:22.975 6.021 - 6.044: 95.7488% ( 13) 00:12:22.975 6.044 - 6.068: 95.8000% ( 6) 00:12:22.975 6.068 - 6.116: 96.0300% ( 27) 00:12:22.975 6.116 - 6.163: 96.4645% ( 51) 00:12:22.975 6.163 - 6.210: 96.5752% ( 13) 00:12:22.975 6.210 - 6.258: 96.8734% ( 35) 00:12:22.975 6.258 - 6.305: 97.4612% ( 69) 00:12:22.975 6.305 - 6.353: 97.5890% ( 15) 00:12:22.975 6.353 - 6.400: 97.6316% ( 5) 00:12:22.975 6.400 - 6.447: 97.6913% ( 7) 00:12:22.975 6.447 - 6.495: 97.8105% ( 14) 00:12:22.975 6.495 - 6.542: 98.0320% ( 26) 00:12:22.975 6.542 - 6.590: 98.0746% ( 5) 00:12:22.975 6.637 - 6.684: 98.1683% ( 11) 00:12:22.975 6.684 - 6.732: 98.2109% ( 5) 00:12:22.975 6.779 - 6.827: 98.2195% ( 1) 00:12:22.975 6.827 - 6.874: 98.3132% ( 11) 00:12:22.975 6.874 - 6.921: 98.6113% ( 35) 00:12:22.975 6.921 - 6.969: 98.8073% ( 23) 00:12:22.975 6.969 - 7.016: 98.9010% ( 11) 00:12:22.975 7.016 - 7.064: 98.9351% ( 4) 00:12:22.975 7.064 - 7.111: 98.9606% ( 3) 00:12:22.975 7.111 - 7.159: 98.9862% ( 3) 00:12:22.975 7.206 - 7.253: 99.0032% ( 2) 00:12:22.975 7.253 - 7.301: 99.0118% ( 1) 00:12:22.975 7.348 - 7.396: 99.0203% ( 1) 00:12:22.975 7.585 - 7.633: 99.0288% ( 1) 00:12:22.975 7.822 - 7.870: 99.0373% ( 1) 00:12:22.975 8.107 - 8.154: 99.0458% ( 1) 00:12:22.975 8.296 - 8.344: 99.0544% ( 1) 00:12:22.975 8.439 - 8.486: 99.0629% ( 1) 00:12:22.975 8.486 - 8.533: 99.0714% ( 1) 00:12:22.975 8.628 - 8.676: 99.0884% ( 2) 00:12:22.975 8.676 - 8.723: 99.0970% ( 1) 00:12:22.975 8.723 - 8.770: 99.1055% ( 1) 00:12:22.975 8.770 - 8.818: 99.1140% ( 1) 00:12:22.975 8.818 - 8.865: 99.1395% ( 3) 00:12:22.975 8.913 - 8.960: 99.1481% ( 1) 00:12:22.975 9.102 - 9.150: 99.1566% ( 1) 00:12:22.975 9.197 - 9.244: 99.1651% ( 1) 00:12:22.975 9.292 - 9.339: 99.1736% ( 1) 00:12:22.975 9.339 - 9.387: 99.1821% ( 1) 00:12:22.975 9.387 - 9.434: 99.1992% ( 2) 00:12:22.975 9.434 - 9.481: 99.2247% ( 3) 00:12:22.975 9.481 - 9.529: 99.2418% ( 2) 00:12:22.975 9.624 - 9.671: 99.2759% ( 4) 00:12:22.975 9.671 - 9.719: 99.3099% ( 4) 00:12:22.975 9.719 - 9.766: 99.3185% ( 1) 00:12:22.975 9.766 - 9.813: 99.3270% ( 1) 00:12:22.975 9.813 - 9.861: 99.3525% ( 3) 00:12:22.975 9.861 - 9.908: 99.3610% ( 1) 00:12:22.975 9.908 - 9.956: 99.3781% ( 2) 00:12:22.975 9.956 - 10.003: 99.3866% ( 1) 00:12:22.975 10.003 - 10.050: 99.4036% ( 2) 00:12:22.975 10.050 - 10.098: 99.4207% ( 2) 00:12:22.975 10.098 - 10.145: 99.4377% ( 2) 00:12:22.975 10.145 - 10.193: 99.4462% ( 1) 00:12:22.975 10.240 - 10.287: 99.4548% ( 1) 00:12:22.975 10.287 - 10.335: 99.4633% ( 1) 00:12:22.975 10.335 - 10.382: 99.4888% ( 3) 00:12:22.975 10.382 - 10.430: 99.4974% ( 1) 00:12:22.975 10.430 - 10.477: 99.5059% ( 1) 00:12:22.975 10.524 - 10.572: 99.5229% ( 2) 00:12:22.975 10.572 - 10.619: 99.5314% ( 1) 00:12:22.975 10.619 - 10.667: 99.5400% ( 1) 00:12:22.975 10.761 - 10.809: 99.5485% ( 1) 00:12:22.975 10.951 - 10.999: 99.5655% ( 2) 00:12:22.975 10.999 - 11.046: 99.5826% ( 2) 00:12:22.975 11.093 - 11.141: 99.5996% ( 2) 00:12:22.975 11.378 - 11.425: 99.6081% ( 1) 00:12:22.975 11.425 - 11.473: 99.6251% ( 2) 00:12:22.975 11.615 - 11.662: 99.6422% ( 2) 00:12:22.975 11.662 - 11.710: 99.6677% ( 3) 00:12:22.975 11.947 - 11.994: 99.6763% ( 1) 
00:12:22.975 11.994 - 12.041: 99.6848% ( 1) 00:12:22.975 12.136 - 12.231: 99.7103% ( 3) 00:12:22.975 12.516 - 12.610: 99.7189% ( 1) 00:12:22.975 12.610 - 12.705: 99.7359% ( 2) 00:12:22.975 12.800 - 12.895: 99.7444% ( 1) 00:12:22.975 12.895 - 12.990: 99.7615% ( 2) 00:12:22.975 13.369 - 13.464: 99.7700% ( 1) 00:12:22.975 13.559 - 13.653: 99.7785% ( 1) 00:12:22.975 13.653 - 13.748: 99.7955% ( 2) 00:12:22.975 13.748 - 13.843: 99.8381% ( 5) 00:12:22.975 13.843 - 13.938: 99.8467% ( 1) 00:12:22.975 13.938 - 14.033: 99.8807% ( 4) 00:12:22.975 14.033 - 14.127: 99.8892% ( 1) 00:12:22.975 14.791 - 14.886: 99.8978% ( 1) 00:12:22.975 16.119 - 16.213: 99.9148% ( 2) 00:12:22.975 3980.705 - 4004.978: 99.9404% ( 3) 00:12:22.975 4004.978 - 4029.250: 100.0000% ( 7) 00:12:22.975 00:12:22.975 Complete histogram 00:12:22.975 ================== 00:12:22.975 Range in us Cumulative Count 00:12:22.975 2.643 - 2.655: 0.0256% ( 3) 00:12:22.975 2.655 - 2.667: 4.9242% ( 575) 00:12:22.975 2.667 - 2.679: 42.4689% ( 4407) 00:12:22.975 2.679 - 2.690: 61.9271% ( 2284) 00:12:22.975 2.690 - 2.702: 69.9693% ( 944) 00:12:22.975 2.702 - 2.714: 81.1467% ( 1312) 00:12:22.975 2.714 - 2.726: 89.4957% ( 980) 00:12:22.975 2.726 - 2.738: 93.5849% ( 480) 00:12:22.975 2.738 - 2.750: 95.6807% ( 246) [2024-05-15 00:49:09.714116] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:22.975 2.750 - 2.761: 96.4304% ( 88) 00:12:22.975 2.761 - 2.773: 96.9330% ( 59) 00:12:22.975 2.773 - 2.785: 97.4357% ( 59) 00:12:22.975 2.785 - 2.797: 97.7168% ( 33) 00:12:22.975 2.797 - 2.809: 97.9894% ( 32) 00:12:22.975 2.809 - 2.821: 98.1257% ( 16) 00:12:22.975 2.821 - 2.833: 98.3132% ( 22) 00:12:22.975 2.833 - 2.844: 98.4495% ( 16) 00:12:22.975 2.844 - 2.856: 98.5517% ( 12) 00:12:22.975 2.856 - 2.868: 98.6028% ( 6) 00:12:22.975 2.868 - 2.880: 98.6284% ( 3) 00:12:22.975 2.880 - 2.892: 98.6369% ( 1) 00:12:22.975 2.904 - 2.916: 98.6454% ( 1) 00:12:22.975 2.916 - 2.927: 98.6625% ( 2) 00:12:22.975 2.927 - 2.939: 98.6710% ( 1) 00:12:22.975 2.939 - 2.951: 98.6795% ( 1) 00:12:22.975 2.951 - 2.963: 98.7051% ( 3) 00:12:22.975 2.963 - 2.975: 98.7221% ( 2) 00:12:22.975 2.975 - 2.987: 98.7391% ( 2) 00:12:22.975 2.987 - 2.999: 98.7477% ( 1) 00:12:22.975 3.034 - 3.058: 98.7562% ( 1) 00:12:22.975 3.319 - 3.342: 98.7647% ( 1) 00:12:22.975 3.413 - 3.437: 98.7732% ( 1) 00:12:22.975 3.437 - 3.461: 98.7903% ( 2) 00:12:22.975 3.508 - 3.532: 98.8073% ( 2) 00:12:22.975 3.532 - 3.556: 98.8414% ( 4) 00:12:22.975 3.556 - 3.579: 98.8584% ( 2) 00:12:22.975 3.579 - 3.603: 98.8840% ( 3) 00:12:22.975 3.603 - 3.627: 98.9351% ( 6) 00:12:22.975 3.627 - 3.650: 98.9521% ( 2) 00:12:22.975 3.650 - 3.674: 98.9777% ( 3) 00:12:22.975 3.674 - 3.698: 99.0032% ( 3) 00:12:22.975 3.698 - 3.721: 99.0203% ( 2) 00:12:22.975 3.721 - 3.745: 99.0288% ( 1) 00:12:22.975 3.745 - 3.769: 99.0373% ( 1) 00:12:22.975 3.769 - 3.793: 99.0458% ( 1) 00:12:22.975 3.816 - 3.840: 99.0544% ( 1) 00:12:22.975 3.864 - 3.887: 99.0629% ( 1) 00:12:22.975 4.053 - 4.077: 99.0714% ( 1) 00:12:22.975 4.290 - 4.314: 99.0799% ( 1) 00:12:22.975 4.693 - 4.717: 99.0884% ( 1) 00:12:22.975 5.973 - 5.997: 99.0970% ( 1) 00:12:22.975 6.258 - 6.305: 99.1055% ( 1) 00:12:22.975 6.637 - 6.684: 99.1140% ( 1) 00:12:22.975 6.684 - 6.732: 99.1225% ( 1) 00:12:22.975 6.779 - 6.827: 99.1310% ( 1) 00:12:22.975 6.827 - 6.874: 99.1395% ( 1) 00:12:22.975 6.921 - 6.969: 99.1481% ( 1) 00:12:22.975 7.016 - 7.064: 99.1566% ( 1) 00:12:22.975 7.064 - 7.111: 99.1651% ( 1) 00:12:22.975 7.159 - 7.206:
99.1736% ( 1) 00:12:22.975 7.206 - 7.253: 99.1907% ( 2) 00:12:22.975 7.396 - 7.443: 99.1992% ( 1) 00:12:22.975 7.443 - 7.490: 99.2077% ( 1) 00:12:22.975 7.585 - 7.633: 99.2162% ( 1) 00:12:22.975 7.727 - 7.775: 99.2247% ( 1) 00:12:22.975 7.775 - 7.822: 99.2333% ( 1) 00:12:22.975 7.964 - 8.012: 99.2418% ( 1) 00:12:22.975 8.154 - 8.201: 99.2503% ( 1) 00:12:22.975 8.439 - 8.486: 99.2588% ( 1) 00:12:22.975 8.770 - 8.818: 99.2673% ( 1) 00:12:22.975 10.240 - 10.287: 99.2759% ( 1) 00:12:22.975 12.895 - 12.990: 99.2844% ( 1) 00:12:22.975 13.559 - 13.653: 99.2929% ( 1) 00:12:22.975 14.127 - 14.222: 99.3014% ( 1) 00:12:22.975 3980.705 - 4004.978: 99.6933% ( 46) 00:12:22.975 4004.978 - 4029.250: 99.9915% ( 35) 00:12:22.975 6990.507 - 7039.052: 100.0000% ( 1) 00:12:22.975 00:12:22.975 00:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:22.975 00:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:22.975 00:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:22.975 00:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:22.976 00:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:23.234 [ 00:12:23.234 { 00:12:23.234 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:23.234 "subtype": "Discovery", 00:12:23.234 "listen_addresses": [], 00:12:23.234 "allow_any_host": true, 00:12:23.234 "hosts": [] 00:12:23.234 }, 00:12:23.234 { 00:12:23.234 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:23.234 "subtype": "NVMe", 00:12:23.234 "listen_addresses": [ 00:12:23.234 { 00:12:23.234 "trtype": "VFIOUSER", 00:12:23.234 "adrfam": "IPv4", 00:12:23.234 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:23.234 "trsvcid": "0" 00:12:23.234 } 00:12:23.234 ], 00:12:23.234 "allow_any_host": true, 00:12:23.234 "hosts": [], 00:12:23.234 "serial_number": "SPDK1", 00:12:23.234 "model_number": "SPDK bdev Controller", 00:12:23.234 "max_namespaces": 32, 00:12:23.234 "min_cntlid": 1, 00:12:23.234 "max_cntlid": 65519, 00:12:23.234 "namespaces": [ 00:12:23.234 { 00:12:23.234 "nsid": 1, 00:12:23.234 "bdev_name": "Malloc1", 00:12:23.234 "name": "Malloc1", 00:12:23.234 "nguid": "0C792F67019A4709B1A2A6E4102CC468", 00:12:23.234 "uuid": "0c792f67-019a-4709-b1a2-a6e4102cc468" 00:12:23.234 }, 00:12:23.234 { 00:12:23.234 "nsid": 2, 00:12:23.234 "bdev_name": "Malloc3", 00:12:23.234 "name": "Malloc3", 00:12:23.234 "nguid": "8452109985D14FE68656FADC5275FAD5", 00:12:23.234 "uuid": "84521099-85d1-4fe6-8656-fadc5275fad5" 00:12:23.234 } 00:12:23.234 ] 00:12:23.234 }, 00:12:23.234 { 00:12:23.234 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:23.234 "subtype": "NVMe", 00:12:23.234 "listen_addresses": [ 00:12:23.234 { 00:12:23.234 "trtype": "VFIOUSER", 00:12:23.234 "adrfam": "IPv4", 00:12:23.234 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:23.234 "trsvcid": "0" 00:12:23.234 } 00:12:23.234 ], 00:12:23.234 "allow_any_host": true, 00:12:23.234 "hosts": [], 00:12:23.234 "serial_number": "SPDK2", 00:12:23.234 "model_number": "SPDK bdev Controller", 00:12:23.234 "max_namespaces": 32, 00:12:23.234 "min_cntlid": 1, 00:12:23.234 "max_cntlid": 65519, 00:12:23.234 "namespaces": [ 00:12:23.234 { 00:12:23.234 "nsid": 1, 00:12:23.234 "bdev_name": "Malloc2", 00:12:23.234 
"name": "Malloc2", 00:12:23.234 "nguid": "77908B3610304EF99AB609B8DD11D9FD", 00:12:23.234 "uuid": "77908b36-1030-4ef9-9ab6-09b8dd11d9fd" 00:12:23.234 } 00:12:23.234 ] 00:12:23.234 } 00:12:23.234 ] 00:12:23.234 00:49:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:23.234 00:49:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3985030 00:12:23.234 00:49:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:23.234 00:49:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:23.234 00:49:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:12:23.234 00:49:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:23.234 00:49:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:23.234 00:49:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:12:23.234 00:49:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:23.234 00:49:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:23.234 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.234 [2024-05-15 00:49:10.243457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:23.492 Malloc4 00:12:23.492 00:49:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:23.751 [2024-05-15 00:49:10.700995] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:23.751 00:49:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:23.751 Asynchronous Event Request test 00:12:23.751 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:23.751 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:23.751 Registering asynchronous event callbacks... 00:12:23.751 Starting namespace attribute notice tests for all controllers... 00:12:23.751 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:23.751 aer_cb - Changed Namespace 00:12:23.751 Cleaning up... 
00:12:24.011 [ 00:12:24.011 { 00:12:24.011 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:24.011 "subtype": "Discovery", 00:12:24.011 "listen_addresses": [], 00:12:24.011 "allow_any_host": true, 00:12:24.011 "hosts": [] 00:12:24.011 }, 00:12:24.011 { 00:12:24.011 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:24.011 "subtype": "NVMe", 00:12:24.011 "listen_addresses": [ 00:12:24.011 { 00:12:24.011 "trtype": "VFIOUSER", 00:12:24.011 "adrfam": "IPv4", 00:12:24.011 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:24.011 "trsvcid": "0" 00:12:24.011 } 00:12:24.011 ], 00:12:24.011 "allow_any_host": true, 00:12:24.011 "hosts": [], 00:12:24.011 "serial_number": "SPDK1", 00:12:24.011 "model_number": "SPDK bdev Controller", 00:12:24.011 "max_namespaces": 32, 00:12:24.011 "min_cntlid": 1, 00:12:24.011 "max_cntlid": 65519, 00:12:24.011 "namespaces": [ 00:12:24.011 { 00:12:24.011 "nsid": 1, 00:12:24.011 "bdev_name": "Malloc1", 00:12:24.011 "name": "Malloc1", 00:12:24.011 "nguid": "0C792F67019A4709B1A2A6E4102CC468", 00:12:24.011 "uuid": "0c792f67-019a-4709-b1a2-a6e4102cc468" 00:12:24.011 }, 00:12:24.011 { 00:12:24.011 "nsid": 2, 00:12:24.011 "bdev_name": "Malloc3", 00:12:24.011 "name": "Malloc3", 00:12:24.011 "nguid": "8452109985D14FE68656FADC5275FAD5", 00:12:24.011 "uuid": "84521099-85d1-4fe6-8656-fadc5275fad5" 00:12:24.011 } 00:12:24.011 ] 00:12:24.011 }, 00:12:24.011 { 00:12:24.011 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:24.011 "subtype": "NVMe", 00:12:24.011 "listen_addresses": [ 00:12:24.011 { 00:12:24.011 "trtype": "VFIOUSER", 00:12:24.011 "adrfam": "IPv4", 00:12:24.011 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:24.011 "trsvcid": "0" 00:12:24.011 } 00:12:24.011 ], 00:12:24.011 "allow_any_host": true, 00:12:24.011 "hosts": [], 00:12:24.011 "serial_number": "SPDK2", 00:12:24.011 "model_number": "SPDK bdev Controller", 00:12:24.011 "max_namespaces": 32, 00:12:24.011 "min_cntlid": 1, 00:12:24.011 "max_cntlid": 65519, 00:12:24.011 "namespaces": [ 00:12:24.011 { 00:12:24.011 "nsid": 1, 00:12:24.011 "bdev_name": "Malloc2", 00:12:24.011 "name": "Malloc2", 00:12:24.011 "nguid": "77908B3610304EF99AB609B8DD11D9FD", 00:12:24.011 "uuid": "77908b36-1030-4ef9-9ab6-09b8dd11d9fd" 00:12:24.011 }, 00:12:24.011 { 00:12:24.011 "nsid": 2, 00:12:24.011 "bdev_name": "Malloc4", 00:12:24.011 "name": "Malloc4", 00:12:24.011 "nguid": "E547580A30734C04BE0E8C059BB99DDA", 00:12:24.011 "uuid": "e547580a-3073-4c04-be0e-8c059bb99dda" 00:12:24.011 } 00:12:24.011 ] 00:12:24.011 } 00:12:24.011 ] 00:12:24.011 00:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3985030 00:12:24.011 00:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:24.011 00:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3980664 00:12:24.011 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3980664 ']' 00:12:24.011 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3980664 00:12:24.011 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:12:24.011 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:24.011 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3980664 00:12:24.011 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:24.011 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo 
']' 00:12:24.011 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3980664' 00:12:24.011 killing process with pid 3980664 00:12:24.011 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3980664 00:12:24.011 [2024-05-15 00:49:11.043147] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:24.011 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3980664 00:12:24.271 00:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:24.531 00:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:24.531 00:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:24.531 00:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:24.531 00:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:24.531 00:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3985140 00:12:24.531 00:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:24.531 00:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3985140' 00:12:24.531 Process pid: 3985140 00:12:24.531 00:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:24.531 00:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3985140 00:12:24.531 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3985140 ']' 00:12:24.531 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.531 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:24.531 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.531 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:24.531 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:24.531 [2024-05-15 00:49:11.377083] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:24.531 [2024-05-15 00:49:11.378334] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:12:24.531 [2024-05-15 00:49:11.378402] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.531 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.531 [2024-05-15 00:49:11.438766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.531 [2024-05-15 00:49:11.559788] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:24.531 [2024-05-15 00:49:11.559847] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.531 [2024-05-15 00:49:11.559863] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.531 [2024-05-15 00:49:11.559876] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.531 [2024-05-15 00:49:11.559887] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.531 [2024-05-15 00:49:11.559957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.531 [2024-05-15 00:49:11.560007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.531 [2024-05-15 00:49:11.560091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.531 [2024-05-15 00:49:11.560096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.790 [2024-05-15 00:49:11.656083] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:24.790 [2024-05-15 00:49:11.656290] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:24.790 [2024-05-15 00:49:11.656553] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:24.790 [2024-05-15 00:49:11.657073] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:24.790 [2024-05-15 00:49:11.657337] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
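The interrupt-mode setup that follows (the @108 path, run with --interrupt-mode and transport args '-M -I') condenses to a short RPC sequence. A minimal sketch of the equivalent manual steps under the same assumption that $SPDK (placeholder) is the repo root; the loop is my condensation of the two per-device passes recorded below:

  # start the target in interrupt mode, then create the vfio-user transport
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  for i in 1 2; do
    # one socket directory, malloc bdev, subsystem, namespace and listener per emulated device
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done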
00:12:24.790 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:24.790 00:49:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:12:24.790 00:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:25.724 00:49:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:25.982 00:49:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:25.982 00:49:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:25.982 00:49:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:25.982 00:49:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:25.982 00:49:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:26.242 Malloc1 00:12:26.242 00:49:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:26.810 00:49:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:26.810 00:49:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:27.376 [2024-05-15 00:49:14.132784] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:27.376 00:49:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:27.376 00:49:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:27.376 00:49:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:27.635 Malloc2 00:12:27.635 00:49:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:27.893 00:49:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:28.151 00:49:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:28.409 00:49:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:28.409 00:49:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3985140 00:12:28.409 00:49:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3985140 ']' 00:12:28.409 00:49:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3985140 
00:12:28.409 00:49:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:12:28.409 00:49:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:28.409 00:49:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3985140 00:12:28.409 00:49:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:28.409 00:49:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:28.409 00:49:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3985140' 00:12:28.409 killing process with pid 3985140 00:12:28.409 00:49:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3985140 00:12:28.409 [2024-05-15 00:49:15.278879] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:28.409 00:49:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3985140 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:28.668 00:12:28.668 real 0m53.319s 00:12:28.668 user 3m30.675s 00:12:28.668 sys 0m4.315s 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:28.668 ************************************ 00:12:28.668 END TEST nvmf_vfio_user 00:12:28.668 ************************************ 00:12:28.668 00:49:15 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:28.668 00:49:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:28.668 00:49:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:28.668 00:49:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:28.668 ************************************ 00:12:28.668 START TEST nvmf_vfio_user_nvme_compliance 00:12:28.668 ************************************ 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:28.668 * Looking for test storage... 
00:12:28.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.668 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=3985610 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3985610' 00:12:28.669 Process pid: 3985610 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3985610 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 3985610 ']' 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:28.669 00:49:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:28.669 [2024-05-15 00:49:15.718110] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:12:28.669 [2024-05-15 00:49:15.718216] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.928 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.928 [2024-05-15 00:49:15.783337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:28.928 [2024-05-15 00:49:15.904069] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.928 [2024-05-15 00:49:15.904124] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.928 [2024-05-15 00:49:15.904140] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.928 [2024-05-15 00:49:15.904155] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.928 [2024-05-15 00:49:15.904166] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:28.928 [2024-05-15 00:49:15.904262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.928 [2024-05-15 00:49:15.904325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.928 [2024-05-15 00:49:15.904330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.186 00:49:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:29.186 00:49:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:12:29.186 00:49:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:30.121 malloc0 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:30.121 [2024-05-15 00:49:17.084234] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.121 00:49:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:30.121 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.379 00:12:30.379 00:12:30.379 CUnit - A unit testing framework for C - Version 2.1-3 00:12:30.379 http://cunit.sourceforge.net/ 00:12:30.379 00:12:30.379 00:12:30.379 Suite: nvme_compliance 00:12:30.379 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 00:49:17.246494] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:30.379 [2024-05-15 00:49:17.248014] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:30.379 [2024-05-15 00:49:17.248041] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:30.379 [2024-05-15 00:49:17.248056] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:30.379 [2024-05-15 00:49:17.252533] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:30.379 passed 00:12:30.379 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 00:49:17.360240] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:30.379 [2024-05-15 00:49:17.363268] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:30.379 passed 00:12:30.637 Test: admin_identify_ns ...[2024-05-15 00:49:17.471928] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:30.637 [2024-05-15 00:49:17.535971] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:30.637 [2024-05-15 00:49:17.543959] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:30.637 [2024-05-15 00:49:17.560105] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:30.637 passed 00:12:30.637 Test: admin_get_features_mandatory_features ...[2024-05-15 00:49:17.668548] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:30.637 [2024-05-15 00:49:17.673582] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:30.895 passed 00:12:30.895 Test: admin_get_features_optional_features ...[2024-05-15 00:49:17.777221] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:30.895 [2024-05-15 00:49:17.781248] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:30.895 passed 00:12:30.895 Test: admin_set_features_number_of_queues ...[2024-05-15 00:49:17.881460] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.154 [2024-05-15 00:49:17.987099] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.154 passed 00:12:31.154 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 00:49:18.091318] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.154 [2024-05-15 00:49:18.094341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.154 passed 
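Every tool in this run addresses the emulated controller with the same transport ID string: trtype:VFIOUSER, traddr set to the per-controller socket directory, and subnqn naming the subsystem. As a quick sanity check outside the compliance suite, the identify example from the same build tree can be pointed at the compliance endpoint the same way (a sketch; identify is not exercised in this log, so treat the exact invocation as an assumption):

  $SPDK/build/examples/identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'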
00:12:31.154 Test: admin_get_log_page_with_lpo ...[2024-05-15 00:49:18.202926] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.412 [2024-05-15 00:49:18.274967] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:31.412 [2024-05-15 00:49:18.288048] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.412 passed 00:12:31.412 Test: fabric_property_get ...[2024-05-15 00:49:18.385425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.412 [2024-05-15 00:49:18.386743] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:31.412 [2024-05-15 00:49:18.388449] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.412 passed 00:12:31.670 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 00:49:18.495154] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.670 [2024-05-15 00:49:18.496472] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:31.670 [2024-05-15 00:49:18.498178] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.670 passed 00:12:31.670 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 00:49:18.599342] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.670 [2024-05-15 00:49:18.686957] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:31.670 [2024-05-15 00:49:18.702953] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:31.670 [2024-05-15 00:49:18.708103] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.928 passed 00:12:31.928 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 00:49:18.808722] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.928 [2024-05-15 00:49:18.810061] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:31.928 [2024-05-15 00:49:18.811754] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.928 passed 00:12:31.928 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 00:49:18.914793] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.186 [2024-05-15 00:49:18.988950] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:32.186 [2024-05-15 00:49:19.012955] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:32.186 [2024-05-15 00:49:19.018113] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.186 passed 00:12:32.186 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 00:49:19.122638] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.186 [2024-05-15 00:49:19.123976] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:32.186 [2024-05-15 00:49:19.124018] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:32.186 [2024-05-15 00:49:19.125666] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.186 passed 00:12:32.186 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
00:49:19.229010] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.445 [2024-05-15 00:49:19.322959] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:32.445 [2024-05-15 00:49:19.330949] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:32.445 [2024-05-15 00:49:19.338947] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:32.445 [2024-05-15 00:49:19.346949] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:32.445 [2024-05-15 00:49:19.376064] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.445 passed 00:12:32.445 Test: admin_create_io_sq_verify_pc ...[2024-05-15 00:49:19.482880] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.445 [2024-05-15 00:49:19.498963] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:32.702 [2024-05-15 00:49:19.516371] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.702 passed 00:12:32.702 Test: admin_create_io_qp_max_qps ...[2024-05-15 00:49:19.616092] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:34.072 [2024-05-15 00:49:20.737958] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:34.072 [2024-05-15 00:49:21.117118] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:34.331 passed 00:12:34.331 Test: admin_create_io_sq_shared_cq ...[2024-05-15 00:49:21.221871] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:34.331 [2024-05-15 00:49:21.351946] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:34.331 [2024-05-15 00:49:21.388037] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:34.589 passed 00:12:34.589 00:12:34.589 Run Summary: Type Total Ran Passed Failed Inactive 00:12:34.589 suites 1 1 n/a 0 0 00:12:34.589 tests 18 18 18 0 0 00:12:34.589 asserts 360 360 360 0 n/a 00:12:34.589 00:12:34.589 Elapsed time = 1.759 seconds 00:12:34.589 00:49:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3985610 00:12:34.589 00:49:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 3985610 ']' 00:12:34.589 00:49:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 3985610 00:12:34.589 00:49:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:12:34.589 00:49:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:34.589 00:49:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3985610 00:12:34.590 00:49:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:34.590 00:49:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:34.590 00:49:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3985610' 00:12:34.590 killing process with pid 3985610 00:12:34.590 00:49:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 3985610 00:12:34.590 [2024-05-15 00:49:21.480937] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:34.590 00:49:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 3985610 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:34.848 00:12:34.848 real 0m6.124s 00:12:34.848 user 0m17.223s 00:12:34.848 sys 0m0.529s 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:34.848 ************************************ 00:12:34.848 END TEST nvmf_vfio_user_nvme_compliance 00:12:34.848 ************************************ 00:12:34.848 00:49:21 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:34.848 00:49:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:34.848 00:49:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:34.848 00:49:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:34.848 ************************************ 00:12:34.848 START TEST nvmf_vfio_user_fuzz 00:12:34.848 ************************************ 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:34.848 * Looking for test storage... 
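The compliance suite that just finished above is driven by compliance/compliance.sh: it starts an SPDK target, exposes a vfio-user controller under /var/run/vfio-user, and runs the CUnit binary against it (18/18 tests passed in this run). A minimal sketch for rerunning just the binary by hand against an already-running target, reusing the exact paths and transport ID from this log; the SPDK checkout path is this job's workspace and would differ on another machine:

    # Re-run the NVMe compliance binary; paths and the transport ID are copied
    # verbatim from the trace above.
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_ROOT/test/nvme/compliance/nvme_compliance" -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
    # Any CUnit failure makes the binary exit non-zero, which the harness's
    # run_test wrapper reports as a failed test section.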
00:12:34.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.848 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:34.849 00:49:21 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3986261 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3986261' 00:12:34.849 Process pid: 3986261 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3986261 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3986261 ']' 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:34.849 00:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:35.414 00:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:35.414 00:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:12:35.414 00:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:36.346 malloc0 00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x
00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
00:12:36.346 00:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
00:13:08.420 Fuzzing completed. Shutting down the fuzz application
00:13:08.420
00:13:08.420 Dumping successful admin opcodes:
00:13:08.420 8, 9, 10, 24,
00:13:08.420 Dumping successful io opcodes:
00:13:08.420 0,
00:13:08.420 NS: 0x200003a1ef00 I/O qp, Total commands completed: 573726, total successful commands: 2212, random_seed: 2885885376
00:13:08.420 NS: 0x200003a1ef00 admin qp, Total commands completed: 124113, total successful commands: 1020, random_seed: 1585897536
00:13:08.420 00:49:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:13:08.420 00:49:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:08.420 00:49:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:13:08.420 00:49:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:08.420 00:49:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3986261
00:13:08.420 00:49:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3986261 ']'
00:13:08.420 00:49:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 3986261
00:13:08.420 00:49:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname
00:13:08.420 00:49:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:13:08.420 00:49:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3986261
00:13:08.420 00:49:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:13:08.420 00:49:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:13:08.420 00:49:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3986261'
00:13:08.420 killing process with pid 3986261
00:13:08.420 00:49:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 3986261
00:13:08.420 00:49:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 3986261
00:13:08.420 00:49:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt
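The trace above shows the whole fuzz scenario end to end: vfio_user_fuzz.sh builds the target over RPC (VFIOUSER transport, a 64 MiB malloc bdev, subsystem, namespace, listener) and then points nvme_fuzz at it for 30 seconds with a fixed seed. A sketch of the same sequence issued through scripts/rpc.py instead of the harness's rpc_cmd wrapper; the arguments mirror the traced rpc_cmd calls, while the rpc.py flag spellings themselves are the standard SPDK ones and should be treated as an assumption, since this log only shows the wrapper form:

    # Target setup, mirroring the rpc_cmd calls traced above (default RPC socket).
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_ROOT/scripts/rpc.py" nvmf_create_transport -t VFIOUSER
    "$SPDK_ROOT/scripts/rpc.py" bdev_malloc_create 64 512 -b malloc0       # 64 MiB, 512 B blocks
    "$SPDK_ROOT/scripts/rpc.py" nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    "$SPDK_ROOT/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    "$SPDK_ROOT/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0
    # Fuzz for 30 s on core 1 (-m 0x2) with seed 123456, exactly as in the run above.
    "$SPDK_ROOT/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

With the fixed -S seed, the opcode statistics printed above (573726 I/O commands, 124113 admin commands) should be reproducible run to run on the same build.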
00:13:08.420 00:49:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:08.420 00:13:08.420 real 0m32.274s 00:13:08.420 user 0m33.342s 00:13:08.420 sys 0m25.663s 00:13:08.420 00:49:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:08.420 00:49:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:08.420 ************************************ 00:13:08.420 END TEST nvmf_vfio_user_fuzz 00:13:08.420 ************************************ 00:13:08.420 00:49:54 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:08.420 00:49:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:08.420 00:49:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:08.420 00:49:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:08.420 ************************************ 00:13:08.420 START TEST nvmf_host_management 00:13:08.420 ************************************ 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:08.420 * Looking for test storage... 00:13:08.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
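Everything traced above is nvmf/common.sh establishing the defaults that every nvmf test, including this host_management run, inherits. Collected in one place as a hedged summary of the traced assignments rather than a copy of the script:

    # Defaults set by nvmf/common.sh in the trace above (values verbatim from this run).
    NVMF_PORT=4420                      # first NVMe/TCP listener port
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_TCP_IP_ADDRESS=127.0.0.1
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # this run got nqn.2014-08.org.nvmexpress:uuid:a27f578f-...
    # build_nvmf_app_args then bakes the shared-memory id and the full trace mask
    # into the target's command line:
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)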
00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:08.420 00:49:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.987 00:49:55 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:08.987 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:08.987 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
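The gather_supported_nvmf_pci_devs trace above is the NIC discovery step: known vendor:device ids are grouped into e810/x722/mlx buckets and the PCI bus is scanned for matches. A condensed sketch of that classification, using only the ids visible in this trace:

    # PCI ids grouped exactly as in the nvmf/common.sh trace above.
    intel=0x8086 mellanox=0x15b3
    e810=("$intel:0x1592" "$intel:0x159b")
    x722=("$intel:0x37d2")
    mlx=("$mellanox:0xa2dc" "$mellanox:0x1021" "$mellanox:0xa2d6" "$mellanox:0x101d"
         "$mellanox:0x1017" "$mellanox:0x1019" "$mellanox:0x1015" "$mellanox:0x1013")
    # On this TCP run only the e810 list is kept (pci_devs=("${e810[@]}")); the
    # machine matched two ice-driven ports, 0000:08:00.0 and 0000:08:00.1
    # (0x8086 - 0x159b), exposed as net devices cvl_0_0 and cvl_0_1.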
00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.987 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:08.988 Found net devices under 0000:08:00.0: cvl_0_0 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:08.988 Found net devices under 0000:08:00.1: cvl_0_1 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:08.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:13:08.988 00:13:08.988 --- 10.0.0.2 ping statistics --- 00:13:08.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.988 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:08.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:13:08.988 00:13:08.988 --- 10.0.0.1 ping statistics --- 00:13:08.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.988 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3990419 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3990419 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3990419 ']' 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:08.988 00:49:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:08.988 [2024-05-15 00:49:55.941265] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
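nvmf_tcp_init, traced above, builds the point-to-point test network: one e810 port (cvl_0_0) moves into a fresh namespace and becomes the target side at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions are ping-verified before nvmf_tgt is started inside the namespace. The same steps as a standalone sketch, device names and addresses verbatim from this run (root required):

    # Recreate the autotest two-endpoint topology.
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP back in
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator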
00:13:08.988 [2024-05-15 00:49:55.941363] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.988 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.988 [2024-05-15 00:49:56.007685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.247 [2024-05-15 00:49:56.129206] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.247 [2024-05-15 00:49:56.129266] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.247 [2024-05-15 00:49:56.129282] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.247 [2024-05-15 00:49:56.129295] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.247 [2024-05-15 00:49:56.129307] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.247 [2024-05-15 00:49:56.132959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.247 [2024-05-15 00:49:56.133046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.247 [2024-05-15 00:49:56.133128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:09.247 [2024-05-15 00:49:56.133161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.247 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:09.247 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:13:09.247 00:49:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:09.247 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:09.247 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:09.247 00:49:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.247 00:49:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:09.247 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.247 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:09.247 [2024-05-15 00:49:56.282697] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.247 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.247 00:49:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:09.247 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:09.247 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:09.247 00:49:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:09.247 00:49:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:09.247 00:49:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:09.247 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.247 00:49:56 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:09.506 Malloc0 00:13:09.506 [2024-05-15 00:49:56.340901] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:09.506 [2024-05-15 00:49:56.341197] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3990464 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3990464 /var/tmp/bdevperf.sock 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3990464 ']' 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:09.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:09.506 { 00:13:09.506 "params": { 00:13:09.506 "name": "Nvme$subsystem", 00:13:09.506 "trtype": "$TEST_TRANSPORT", 00:13:09.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:09.506 "adrfam": "ipv4", 00:13:09.506 "trsvcid": "$NVMF_PORT", 00:13:09.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:09.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:09.506 "hdgst": ${hdgst:-false}, 00:13:09.506 "ddgst": ${ddgst:-false} 00:13:09.506 }, 00:13:09.506 "method": "bdev_nvme_attach_controller" 00:13:09.506 } 00:13:09.506 EOF 00:13:09.506 )") 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:09.506 00:49:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:09.506 "params": { 00:13:09.506 "name": "Nvme0", 00:13:09.506 "trtype": "tcp", 00:13:09.506 "traddr": "10.0.0.2", 00:13:09.506 "adrfam": "ipv4", 00:13:09.506 "trsvcid": "4420", 00:13:09.506 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:09.506 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:09.506 "hdgst": false, 00:13:09.506 "ddgst": false 00:13:09.506 }, 00:13:09.506 "method": "bdev_nvme_attach_controller" 00:13:09.506 }' 00:13:09.506 [2024-05-15 00:49:56.416716] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:13:09.506 [2024-05-15 00:49:56.416809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3990464 ] 00:13:09.506 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.506 [2024-05-15 00:49:56.477324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.764 [2024-05-15 00:49:56.594751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.023 Running I/O for 10 seconds... 00:13:10.023 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:10.023 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:13:10.023 00:49:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:10.023 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.023 00:49:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.023 00:49:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.023 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:10.023 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:10.023 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:10.023 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:10.023 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:10.023 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:10.023 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:10.023 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:10.023 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:10.023 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:10.023 00:49:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.023 00:49:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.023 00:49:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.023 00:49:57 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:13:10.023 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:13:10.023 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:13:10.281 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:13:10.281 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:10.281 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:10.281 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:10.281 00:49:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.281 00:49:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.281 00:49:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.540 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:13:10.540 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:13:10.540 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:10.540 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:10.540 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:10.540 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:10.540 00:49:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.540 00:49:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.540 [2024-05-15 00:49:57.344647] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80bf60 is same with the state(5) to be set 00:13:10.540 [2024-05-15 00:49:57.344816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:10.540 [2024-05-15 00:49:57.344869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.540 [2024-05-15 00:49:57.344899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:10.540 [2024-05-15 00:49:57.344915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.540 [2024-05-15 00:49:57.344939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:10.540 [2024-05-15 00:49:57.344956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.540 [2024-05-15 00:49:57.344982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:10.540 [2024-05-15 00:49:57.345005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.540 [2024-05-15 00:49:57.345020] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeae0d0 is same with the state(5) to be set 00:13:10.540 [2024-05-15 00:49:57.345137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.540 [2024-05-15 00:49:57.345159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.540 [2024-05-15 00:49:57.345183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.540 [2024-05-15 00:49:57.345199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.540 [2024-05-15 00:49:57.345216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.540 [2024-05-15 00:49:57.345231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.540 [2024-05-15 00:49:57.345248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.540 [2024-05-15 00:49:57.345263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.540 [2024-05-15 00:49:57.345280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.540 [2024-05-15 00:49:57.345294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.540 [2024-05-15 00:49:57.345311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.540 [2024-05-15 00:49:57.345326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.540 [2024-05-15 00:49:57.345342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.540 [2024-05-15 00:49:57.345357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.540 [2024-05-15 00:49:57.345374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.540 [2024-05-15 00:49:57.345388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.345971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.345987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.541 [2024-05-15 00:49:57.346737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.541 [2024-05-15 00:49:57.346756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.542 [2024-05-15 00:49:57.346774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.542 [2024-05-15 00:49:57.346790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.542 [2024-05-15 00:49:57.346807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.542 [2024-05-15 00:49:57.346822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.542 [2024-05-15 00:49:57.346839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.542 [2024-05-15 00:49:57.346855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.542 [2024-05-15 00:49:57.346872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.542 [2024-05-15 00:49:57.346887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.542 [2024-05-15 00:49:57.346904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.542 [2024-05-15 00:49:57.346920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.542 [2024-05-15 00:49:57.346943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.542 [2024-05-15 00:49:57.346960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.542 [2024-05-15 00:49:57.346977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.542 [2024-05-15 00:49:57.346993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.542 [2024-05-15 00:49:57.347010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.542 [2024-05-15 00:49:57.347025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.542 [2024-05-15 00:49:57.347042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.542 [2024-05-15 00:49:57.347058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.542 [2024-05-15 00:49:57.347075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.542 [2024-05-15 00:49:57.347090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.542 [2024-05-15 00:49:57.347107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.542 [2024-05-15 00:49:57.347122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.542 [2024-05-15 00:49:57.347140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.542 [2024-05-15 00:49:57.347155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.542 [2024-05-15 00:49:57.347176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.542 [2024-05-15 00:49:57.347192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.542 [2024-05-15 00:49:57.347209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.542 [2024-05-15 00:49:57.347225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.542 [2024-05-15 00:49:57.347242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:10.542 [2024-05-15 00:49:57.347257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.542 [2024-05-15 00:49:57.347336] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12e7230 was disconnected and freed. reset controller. 00:13:10.542 [2024-05-15 00:49:57.348663] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:10.542 00:49:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.542 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:10.542 00:49:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.542 00:49:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.542 task offset: 71936 on job bdev=Nvme0n1 fails 00:13:10.542 00:13:10.542 Latency(us) 00:13:10.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.542 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:10.542 Job: Nvme0n1 ended in about 0.42 seconds with error 00:13:10.542 Verification LBA range: start 0x0 length 0x400 00:13:10.542 Nvme0n1 : 0.42 1217.14 76.07 152.14 0.00 45188.29 3034.07 42525.58 00:13:10.542 =================================================================================================================== 00:13:10.542 Total : 1217.14 76.07 152.14 0.00 45188.29 3034.07 42525.58 00:13:10.542 [2024-05-15 00:49:57.351013] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:10.542 [2024-05-15 00:49:57.351045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeae0d0 (9): Bad file descriptor 00:13:10.542 00:49:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.542 00:49:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:10.542 [2024-05-15 00:49:57.485109] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: 
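The pass/fail gate for the first bdevperf run (host_management.sh lines 54-64, traced above) simply polls the bdevperf RPC socket until Nvme0n1 has completed at least 100 reads; this run cleared the bar on its second sample (67 reads, then 451), after which the host was removed from the subsystem mid-I/O to force the qpair aborts and controller reset logged above. A minimal sketch of that polling loop, assuming rpc.py is on hand; the socket path is from the trace, the retry budget is an assumption:

    # Return 0 once Nvme0n1 has served >= 100 reads, 1 if we give up.
    # /var/tmp/bdevperf.sock is the socket used in the trace above; the
    # retry budget (20) is an assumption -- the trace only shows the
    # decrement and the i != 0 test.
    wait_for_reads() {
        local i read_io_count
        for ((i = 20; i != 0; i--)); do
            read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock \
                bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
            [ "$read_io_count" -ge 100 ] && return 0
            sleep 0.25
        done
        return 1
    }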
Resetting controller successful. 00:13:11.528 00:49:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3990464 00:13:11.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3990464) - No such process 00:13:11.528 00:49:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:11.528 00:49:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:11.528 00:49:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:11.528 00:49:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:11.528 00:49:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:11.528 00:49:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:11.528 00:49:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:11.528 00:49:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:11.528 { 00:13:11.528 "params": { 00:13:11.528 "name": "Nvme$subsystem", 00:13:11.528 "trtype": "$TEST_TRANSPORT", 00:13:11.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:11.528 "adrfam": "ipv4", 00:13:11.528 "trsvcid": "$NVMF_PORT", 00:13:11.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:11.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:11.528 "hdgst": ${hdgst:-false}, 00:13:11.528 "ddgst": ${ddgst:-false} 00:13:11.528 }, 00:13:11.528 "method": "bdev_nvme_attach_controller" 00:13:11.528 } 00:13:11.528 EOF 00:13:11.528 )") 00:13:11.528 00:49:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:11.528 00:49:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:11.528 00:49:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:11.528 00:49:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:11.528 "params": { 00:13:11.528 "name": "Nvme0", 00:13:11.528 "trtype": "tcp", 00:13:11.528 "traddr": "10.0.0.2", 00:13:11.528 "adrfam": "ipv4", 00:13:11.528 "trsvcid": "4420", 00:13:11.528 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:11.528 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:11.528 "hdgst": false, 00:13:11.528 "ddgst": false 00:13:11.528 }, 00:13:11.528 "method": "bdev_nvme_attach_controller" 00:13:11.528 }' 00:13:11.528 [2024-05-15 00:49:58.407038] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:13:11.528 [2024-05-15 00:49:58.407125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3990682 ] 00:13:11.528 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.528 [2024-05-15 00:49:58.468053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.786 [2024-05-15 00:49:58.587928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.044 Running I/O for 1 seconds... 
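The bdevperf restart above takes its controller definition as generated JSON on an anonymous fd (--json /dev/fd/62) rather than a config file. A standalone sketch of the same pattern with the parameters printed above; note the outer "subsystems"/"bdev" wrapper is the usual SPDK JSON-config shape and is an assumption here, since gen_nvmf_target_json only prints the inner object:

    # Controller definition matching the config printed in the trace above.
    # The "subsystems"/"bdev" wrapper is assumed; only the inner object is
    # shown verbatim in the log.
    config='{
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }'
    # Same flags as the run above: queue depth 64, 64 KiB verify I/O, 1 second.
    build/examples/bdevperf --json <(printf '%s' "$config") -q 64 -o 65536 -w verify -t 1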
00:13:12.976 00:13:12.976 Latency(us) 00:13:12.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.976 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:12.976 Verification LBA range: start 0x0 length 0x400 00:13:12.976 Nvme0n1 : 1.02 1481.55 92.60 0.00 0.00 42126.63 3155.44 38447.79 00:13:12.976 =================================================================================================================== 00:13:12.976 Total : 1481.55 92.60 0.00 0.00 42126.63 3155.44 38447.79 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:13.234 rmmod nvme_tcp 00:13:13.234 rmmod nvme_fabrics 00:13:13.234 rmmod nvme_keyring 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3990419 ']' 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3990419 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 3990419 ']' 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 3990419 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3990419 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3990419' 00:13:13.234 killing process with pid 3990419 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 3990419 00:13:13.234 [2024-05-15 00:50:00.261734] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:13.234 00:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 3990419 00:13:13.494 [2024-05-15 00:50:00.464900] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:13.494 00:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:13.494 00:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:13.494 00:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:13.494 00:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.494 00:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:13.494 00:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.494 00:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.494 00:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.033 00:50:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:16.033 00:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:16.033 00:13:16.033 real 0m8.429s 00:13:16.033 user 0m20.173s 00:13:16.033 sys 0m2.362s 00:13:16.033 00:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:16.033 00:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.033 ************************************ 00:13:16.033 END TEST nvmf_host_management 00:13:16.033 ************************************ 00:13:16.033 00:50:02 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:16.033 00:50:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:16.033 00:50:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:16.033 00:50:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:16.033 ************************************ 00:13:16.033 START TEST nvmf_lvol 00:13:16.033 ************************************ 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:16.033 * Looking for test storage... 
00:13:16.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.033 00:50:02 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:16.033 00:50:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:17.410 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:17.410 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:17.410 Found net devices under 0000:08:00.0: cvl_0_0 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:17.410 Found net devices under 0000:08:00.1: cvl_0_1 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:17.410 
00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:17.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:13:17.410 00:13:17.410 --- 10.0.0.2 ping statistics --- 00:13:17.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.410 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:17.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:13:17.410 00:13:17.410 --- 10.0.0.1 ping statistics --- 00:13:17.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.410 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3992294 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3992294 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 3992294 ']' 00:13:17.410 00:50:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.411 00:50:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:17.411 00:50:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.411 00:50:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:17.411 00:50:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:17.411 [2024-05-15 00:50:04.464918] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:13:17.411 [2024-05-15 00:50:04.465016] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.668 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.668 [2024-05-15 00:50:04.529523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:17.668 [2024-05-15 00:50:04.645376] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.668 [2024-05-15 00:50:04.645435] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:17.668 [2024-05-15 00:50:04.645451] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.668 [2024-05-15 00:50:04.645464] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.668 [2024-05-15 00:50:04.645476] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.668 [2024-05-15 00:50:04.645554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.668 [2024-05-15 00:50:04.645615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.668 [2024-05-15 00:50:04.645618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.926 00:50:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:17.926 00:50:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:13:17.926 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:17.926 00:50:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:17.926 00:50:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:17.926 00:50:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.926 00:50:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:18.185 [2024-05-15 00:50:05.050830] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.185 00:50:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.443 00:50:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:18.443 00:50:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.701 00:50:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:18.701 00:50:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:19.267 00:50:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:19.524 00:50:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=884010ce-8030-4508-961b-24e44a85bcb1 00:13:19.524 00:50:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 884010ce-8030-4508-961b-24e44a85bcb1 lvol 20 00:13:19.782 00:50:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=dcce7cc3-b49d-4a42-8099-aeb65bd7bfa4 00:13:19.782 00:50:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:20.040 00:50:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dcce7cc3-b49d-4a42-8099-aeb65bd7bfa4 00:13:20.297 00:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
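Stacked in order, the provisioning RPCs above build the device under test: two 64 MiB malloc bdevs striped into a raid0, an lvstore on the raid, a lvol of size 20 (MiB, given the 64 MiB malloc base), and an NVMe-oF subsystem exporting that lvol over TCP. Condensed into a sketch, with the rpc.py path abbreviated; each call is taken verbatim from the trace:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512          # prints Malloc0
    $rpc bdev_malloc_create 64 512          # prints Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # 884010ce-... above
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # dcce7cc3-... above
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420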
00:13:20.555 [2024-05-15 00:50:07.481418] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:20.555 [2024-05-15 00:50:07.481708] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.555 00:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:20.813 00:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3992635 00:13:20.813 00:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:20.813 00:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:20.813 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.745 00:50:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot dcce7cc3-b49d-4a42-8099-aeb65bd7bfa4 MY_SNAPSHOT 00:13:22.311 00:50:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=35624c81-a9d3-4bf8-88a2-e32dcfafc21d 00:13:22.311 00:50:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize dcce7cc3-b49d-4a42-8099-aeb65bd7bfa4 30 00:13:22.569 00:50:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 35624c81-a9d3-4bf8-88a2-e32dcfafc21d MY_CLONE 00:13:22.827 00:50:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d8335ccd-a695-4b0a-9d1a-04bb6956b4ce 00:13:22.827 00:50:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d8335ccd-a695-4b0a-9d1a-04bb6956b4ce 00:13:23.759 00:50:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3992635 00:13:31.863 Initializing NVMe Controllers 00:13:31.863 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:31.863 Controller IO queue size 128, less than required. 00:13:31.863 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:31.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:31.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:31.863 Initialization complete. Launching workers. 
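While spdk_nvme_perf (pid 3992635, started above with -w randwrite -t 10 -c 0x18) hammers the namespace from two cores, the test reshapes the volume underneath it: snapshot, grow, clone, then inflate the clone so it stops sharing extents with the snapshot. The RPC sequence as issued above, sketched with the lvol UUID the trace created:

    rpc=scripts/rpc.py
    lvol=dcce7cc3-b49d-4a42-8099-aeb65bd7bfa4         # the lvol created above
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # 35624c81-... above
    $rpc bdev_lvol_resize "$lvol" 30                      # INIT_SIZE 20 -> FINAL_SIZE 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # d8335ccd-... above
    $rpc bdev_lvol_inflate "$clone"                       # detach clone from its snapshot

The perf job then runs to completion (wait 3992635) and reports the per-core latency table that follows.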
00:13:31.863 ======================================================== 00:13:31.863 Latency(us) 00:13:31.863 Device Information : IOPS MiB/s Average min max 00:13:31.863 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9755.50 38.11 13124.21 461.85 100258.37 00:13:31.863 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9614.00 37.55 13323.19 2067.08 76660.83 00:13:31.863 ======================================================== 00:13:31.863 Total : 19369.50 75.66 13222.97 461.85 100258.37 00:13:31.863 00:13:31.863 00:50:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:31.863 00:50:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dcce7cc3-b49d-4a42-8099-aeb65bd7bfa4 00:13:31.864 00:50:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 884010ce-8030-4508-961b-24e44a85bcb1 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:32.122 rmmod nvme_tcp 00:13:32.122 rmmod nvme_fabrics 00:13:32.122 rmmod nvme_keyring 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3992294 ']' 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3992294 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 3992294 ']' 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 3992294 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:32.122 00:50:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3992294 00:13:32.381 00:50:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:32.381 00:50:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:32.381 00:50:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3992294' 00:13:32.381 killing process with pid 3992294 00:13:32.381 00:50:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 3992294 00:13:32.381 [2024-05-15 00:50:19.185301] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled 
for removal in v24.09 hit 1 times 00:13:32.381 00:50:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 3992294 00:13:32.381 00:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:32.381 00:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:32.381 00:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:32.381 00:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:32.381 00:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:32.381 00:50:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.381 00:50:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.641 00:50:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:34.552 00:13:34.552 real 0m18.888s 00:13:34.552 user 1m6.426s 00:13:34.552 sys 0m5.085s 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:34.552 ************************************ 00:13:34.552 END TEST nvmf_lvol 00:13:34.552 ************************************ 00:13:34.552 00:50:21 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:34.552 00:50:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:34.552 00:50:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:34.552 00:50:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:34.552 ************************************ 00:13:34.552 START TEST nvmf_lvs_grow 00:13:34.552 ************************************ 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:34.552 * Looking for test storage... 
00:13:34.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.552 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:34.812 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.813 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:34.813 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:34.813 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:34.813 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.813 00:50:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.813 00:50:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.813 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:34.813 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:34.813 00:50:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:34.813 00:50:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:36.716 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:36.717 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:36.717 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:36.717 Found net devices under 0000:08:00.0: cvl_0_0 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:36.717 Found net devices under 0000:08:00.1: cvl_0_1 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:36.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:13:36.717 00:13:36.717 --- 10.0.0.2 ping statistics --- 00:13:36.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.717 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:36.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:36.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:13:36.717 00:13:36.717 --- 10.0.0.1 ping statistics --- 00:13:36.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.717 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3995223 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3995223 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3995223 ']' 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:36.717 [2024-05-15 00:50:23.467956] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:13:36.717 [2024-05-15 00:50:23.468055] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.717 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.717 [2024-05-15 00:50:23.534556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.717 [2024-05-15 00:50:23.652543] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.717 [2024-05-15 00:50:23.652607] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:36.717 [2024-05-15 00:50:23.652623] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.717 [2024-05-15 00:50:23.652635] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.717 [2024-05-15 00:50:23.652647] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.717 [2024-05-15 00:50:23.652684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:36.717 00:50:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:36.976 00:50:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.976 00:50:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:37.234 [2024-05-15 00:50:24.050211] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.234 00:50:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:37.234 00:50:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:37.234 00:50:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:37.234 00:50:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:37.234 ************************************ 00:13:37.234 START TEST lvs_grow_clean 00:13:37.234 ************************************ 00:13:37.234 00:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:13:37.234 00:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:37.234 00:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:37.234 00:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:37.234 00:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:37.234 00:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:37.234 00:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:37.234 00:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:37.234 00:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:37.234 00:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:37.492 00:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:13:37.492 00:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:37.751 00:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=772a4916-2b47-4e69-aeb4-8e83b3ff5ba9 00:13:37.751 00:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a4916-2b47-4e69-aeb4-8e83b3ff5ba9 00:13:37.751 00:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:38.009 00:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:38.009 00:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:38.009 00:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 772a4916-2b47-4e69-aeb4-8e83b3ff5ba9 lvol 150 00:13:38.268 00:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=77c10a2c-58de-483f-9577-867e289715c8 00:13:38.268 00:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:38.268 00:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:38.833 [2024-05-15 00:50:25.592680] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:38.833 [2024-05-15 00:50:25.592765] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:38.833 true 00:13:38.833 00:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a4916-2b47-4e69-aeb4-8e83b3ff5ba9 00:13:38.833 00:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:39.090 00:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:39.090 00:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:39.347 00:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 77c10a2c-58de-483f-9577-867e289715c8 00:13:39.604 00:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:39.861 [2024-05-15 00:50:26.760000] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:39.861 [2024-05-15 
00:50:26.760311] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.861 00:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:40.118 00:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3995574 00:13:40.119 00:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:40.119 00:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3995574 /var/tmp/bdevperf.sock 00:13:40.119 00:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3995574 ']' 00:13:40.119 00:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:40.119 00:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:40.119 00:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:40.119 00:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:40.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:40.119 00:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:40.119 00:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:40.119 [2024-05-15 00:50:27.063735] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:13:40.119 [2024-05-15 00:50:27.063823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3995574 ] 00:13:40.119 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.119 [2024-05-15 00:50:27.118077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.376 [2024-05-15 00:50:27.234070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.376 00:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:40.376 00:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:13:40.376 00:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:40.634 Nvme0n1 00:13:40.892 00:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:40.892 [ 00:13:40.892 { 00:13:40.892 "name": "Nvme0n1", 00:13:40.892 "aliases": [ 00:13:40.892 "77c10a2c-58de-483f-9577-867e289715c8" 00:13:40.892 ], 00:13:40.892 "product_name": "NVMe disk", 00:13:40.892 "block_size": 4096, 00:13:40.892 "num_blocks": 38912, 00:13:40.892 "uuid": "77c10a2c-58de-483f-9577-867e289715c8", 00:13:40.892 "assigned_rate_limits": { 00:13:40.892 "rw_ios_per_sec": 0, 00:13:40.892 "rw_mbytes_per_sec": 0, 00:13:40.892 "r_mbytes_per_sec": 0, 00:13:40.892 "w_mbytes_per_sec": 0 00:13:40.892 }, 00:13:40.892 "claimed": false, 00:13:40.892 "zoned": false, 00:13:40.892 "supported_io_types": { 00:13:40.892 "read": true, 00:13:40.892 "write": true, 00:13:40.892 "unmap": true, 00:13:40.892 "write_zeroes": true, 00:13:40.892 "flush": true, 00:13:40.892 "reset": true, 00:13:40.892 "compare": true, 00:13:40.892 "compare_and_write": true, 00:13:40.892 "abort": true, 00:13:40.892 "nvme_admin": true, 00:13:40.892 "nvme_io": true 00:13:40.892 }, 00:13:40.892 "memory_domains": [ 00:13:40.892 { 00:13:40.892 "dma_device_id": "system", 00:13:40.892 "dma_device_type": 1 00:13:40.892 } 00:13:40.892 ], 00:13:40.892 "driver_specific": { 00:13:40.892 "nvme": [ 00:13:40.892 { 00:13:40.892 "trid": { 00:13:40.892 "trtype": "TCP", 00:13:40.892 "adrfam": "IPv4", 00:13:40.892 "traddr": "10.0.0.2", 00:13:40.892 "trsvcid": "4420", 00:13:40.892 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:40.892 }, 00:13:40.892 "ctrlr_data": { 00:13:40.892 "cntlid": 1, 00:13:40.892 "vendor_id": "0x8086", 00:13:40.892 "model_number": "SPDK bdev Controller", 00:13:40.892 "serial_number": "SPDK0", 00:13:40.892 "firmware_revision": "24.05", 00:13:40.892 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:40.892 "oacs": { 00:13:40.892 "security": 0, 00:13:40.892 "format": 0, 00:13:40.892 "firmware": 0, 00:13:40.892 "ns_manage": 0 00:13:40.892 }, 00:13:40.892 "multi_ctrlr": true, 00:13:40.892 "ana_reporting": false 00:13:40.892 }, 00:13:40.892 "vs": { 00:13:40.892 "nvme_version": "1.3" 00:13:40.892 }, 00:13:40.892 "ns_data": { 00:13:40.892 "id": 1, 00:13:40.892 "can_share": true 00:13:40.892 } 00:13:40.892 } 00:13:40.892 ], 00:13:40.892 "mp_policy": "active_passive" 00:13:40.892 } 00:13:40.892 } 00:13:40.892 ] 00:13:40.892 00:50:27 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3995675 00:13:40.892 00:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:40.892 00:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:41.150 Running I/O for 10 seconds... 00:13:42.104 Latency(us) 00:13:42.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.104 Nvme0n1 : 1.00 13646.00 53.30 0.00 0.00 0.00 0.00 0.00 00:13:42.104 =================================================================================================================== 00:13:42.104 Total : 13646.00 53.30 0.00 0.00 0.00 0.00 0.00 00:13:42.104 00:13:43.037 00:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 772a4916-2b47-4e69-aeb4-8e83b3ff5ba9 00:13:43.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:43.037 Nvme0n1 : 2.00 13767.50 53.78 0.00 0.00 0.00 0.00 0.00 00:13:43.037 =================================================================================================================== 00:13:43.037 Total : 13767.50 53.78 0.00 0.00 0.00 0.00 0.00 00:13:43.037 00:13:43.295 true 00:13:43.296 00:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a4916-2b47-4e69-aeb4-8e83b3ff5ba9 00:13:43.296 00:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:43.553 00:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:43.553 00:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:43.553 00:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3995675 00:13:44.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:44.119 Nvme0n1 : 3.00 13805.00 53.93 0.00 0.00 0.00 0.00 0.00 00:13:44.119 =================================================================================================================== 00:13:44.119 Total : 13805.00 53.93 0.00 0.00 0.00 0.00 0.00 00:13:44.119 00:13:45.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:45.053 Nvme0n1 : 4.00 13894.00 54.27 0.00 0.00 0.00 0.00 0.00 00:13:45.053 =================================================================================================================== 00:13:45.053 Total : 13894.00 54.27 0.00 0.00 0.00 0.00 0.00 00:13:45.053 00:13:45.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:45.987 Nvme0n1 : 5.00 13953.60 54.51 0.00 0.00 0.00 0.00 0.00 00:13:45.987 =================================================================================================================== 00:13:45.987 Total : 13953.60 54.51 0.00 0.00 0.00 0.00 0.00 00:13:45.987 00:13:47.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:47.414 Nvme0n1 : 6.00 13973.00 54.58 0.00 0.00 0.00 0.00 0.00 00:13:47.414 
=================================================================================================================== 00:13:47.414 Total : 13973.00 54.58 0.00 0.00 0.00 0.00 0.00 00:13:47.414 00:13:48.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:48.346 Nvme0n1 : 7.00 14005.57 54.71 0.00 0.00 0.00 0.00 0.00 00:13:48.346 =================================================================================================================== 00:13:48.346 Total : 14005.57 54.71 0.00 0.00 0.00 0.00 0.00 00:13:48.346 00:13:49.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:49.279 Nvme0n1 : 8.00 14040.75 54.85 0.00 0.00 0.00 0.00 0.00 00:13:49.279 =================================================================================================================== 00:13:49.279 Total : 14040.75 54.85 0.00 0.00 0.00 0.00 0.00 00:13:49.279 00:13:50.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.213 Nvme0n1 : 9.00 14068.33 54.95 0.00 0.00 0.00 0.00 0.00 00:13:50.213 =================================================================================================================== 00:13:50.213 Total : 14068.33 54.95 0.00 0.00 0.00 0.00 0.00 00:13:50.213 00:13:51.146 00:13:51.146 Latency(us) 00:13:51.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:51.146 Nvme0n1 : 10.00 14079.55 55.00 0.00 0.00 9085.32 5631.24 17282.09 00:13:51.146 =================================================================================================================== 00:13:51.146 Total : 14079.55 55.00 0.00 0.00 9085.32 5631.24 17282.09 00:13:51.146 0 00:13:51.146 00:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3995574 00:13:51.146 00:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 3995574 ']' 00:13:51.146 00:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3995574 00:13:51.146 00:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:13:51.146 00:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:51.146 00:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3995574 00:13:51.146 00:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:51.146 00:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:51.146 00:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3995574' 00:13:51.146 killing process with pid 3995574 00:13:51.146 00:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3995574 00:13:51.146 Received shutdown signal, test time was about 10.000000 seconds 00:13:51.146 00:13:51.146 Latency(us) 00:13:51.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.146 =================================================================================================================== 00:13:51.146 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:51.146 00:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3995574 00:13:51.404 00:50:38 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:51.661 00:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:51.919 00:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:51.919 00:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a4916-2b47-4e69-aeb4-8e83b3ff5ba9 00:13:52.176 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:52.176 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:52.176 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:52.435 [2024-05-15 00:50:39.454985] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:52.435 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a4916-2b47-4e69-aeb4-8e83b3ff5ba9 00:13:52.435 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:52.435 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a4916-2b47-4e69-aeb4-8e83b3ff5ba9 00:13:52.435 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:52.435 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:52.435 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:52.693 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:52.693 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:52.693 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:52.693 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:52.693 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:52.693 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a4916-2b47-4e69-aeb4-8e83b3ff5ba9 00:13:52.693 request: 00:13:52.693 { 00:13:52.693 "uuid": "772a4916-2b47-4e69-aeb4-8e83b3ff5ba9", 00:13:52.693 "method": "bdev_lvol_get_lvstores", 00:13:52.693 "req_id": 1 00:13:52.693 } 00:13:52.693 
Got JSON-RPC error response 00:13:52.693 response: 00:13:52.693 { 00:13:52.693 "code": -19, 00:13:52.693 "message": "No such device" 00:13:52.693 } 00:13:52.951 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:52.951 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:52.951 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:52.951 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:52.951 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:52.951 aio_bdev 00:13:52.951 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 77c10a2c-58de-483f-9577-867e289715c8 00:13:52.951 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=77c10a2c-58de-483f-9577-867e289715c8 00:13:52.951 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:52.951 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:13:52.951 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:52.951 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:52.951 00:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:53.209 00:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 77c10a2c-58de-483f-9577-867e289715c8 -t 2000 00:13:53.467 [ 00:13:53.467 { 00:13:53.467 "name": "77c10a2c-58de-483f-9577-867e289715c8", 00:13:53.467 "aliases": [ 00:13:53.467 "lvs/lvol" 00:13:53.467 ], 00:13:53.467 "product_name": "Logical Volume", 00:13:53.467 "block_size": 4096, 00:13:53.467 "num_blocks": 38912, 00:13:53.467 "uuid": "77c10a2c-58de-483f-9577-867e289715c8", 00:13:53.467 "assigned_rate_limits": { 00:13:53.467 "rw_ios_per_sec": 0, 00:13:53.467 "rw_mbytes_per_sec": 0, 00:13:53.467 "r_mbytes_per_sec": 0, 00:13:53.467 "w_mbytes_per_sec": 0 00:13:53.467 }, 00:13:53.467 "claimed": false, 00:13:53.467 "zoned": false, 00:13:53.467 "supported_io_types": { 00:13:53.467 "read": true, 00:13:53.467 "write": true, 00:13:53.467 "unmap": true, 00:13:53.467 "write_zeroes": true, 00:13:53.467 "flush": false, 00:13:53.467 "reset": true, 00:13:53.467 "compare": false, 00:13:53.467 "compare_and_write": false, 00:13:53.467 "abort": false, 00:13:53.467 "nvme_admin": false, 00:13:53.467 "nvme_io": false 00:13:53.467 }, 00:13:53.467 "driver_specific": { 00:13:53.467 "lvol": { 00:13:53.467 "lvol_store_uuid": "772a4916-2b47-4e69-aeb4-8e83b3ff5ba9", 00:13:53.467 "base_bdev": "aio_bdev", 00:13:53.467 "thin_provision": false, 00:13:53.467 "num_allocated_clusters": 38, 00:13:53.467 "snapshot": false, 00:13:53.467 "clone": false, 00:13:53.467 "esnap_clone": false 00:13:53.467 } 00:13:53.467 } 00:13:53.467 } 00:13:53.467 ] 00:13:53.725 00:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:13:53.725 00:50:40 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a4916-2b47-4e69-aeb4-8e83b3ff5ba9 00:13:53.725 00:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:53.982 00:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:53.982 00:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a4916-2b47-4e69-aeb4-8e83b3ff5ba9 00:13:53.982 00:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:54.239 00:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:54.239 00:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 77c10a2c-58de-483f-9577-867e289715c8 00:13:54.496 00:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 772a4916-2b47-4e69-aeb4-8e83b3ff5ba9 00:13:54.753 00:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:55.011 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:55.011 00:13:55.011 real 0m17.937s 00:13:55.011 user 0m17.398s 00:13:55.011 sys 0m1.843s 00:13:55.011 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:55.011 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:55.011 ************************************ 00:13:55.011 END TEST lvs_grow_clean 00:13:55.011 ************************************ 00:13:55.011 00:50:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:55.011 00:50:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:55.011 00:50:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:55.011 00:50:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:55.268 ************************************ 00:13:55.268 START TEST lvs_grow_dirty 00:13:55.268 ************************************ 00:13:55.268 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:13:55.268 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:55.268 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:55.268 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:55.268 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:55.268 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:55.268 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local 
lvol_bdev_size_mb=150 00:13:55.268 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:55.268 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:55.268 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:55.526 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:55.526 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:55.784 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1f63ea0f-d7ff-4c64-b269-34889e794e9b 00:13:55.784 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f63ea0f-d7ff-4c64-b269-34889e794e9b 00:13:55.784 00:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:56.043 00:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:56.043 00:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:56.043 00:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1f63ea0f-d7ff-4c64-b269-34889e794e9b lvol 150 00:13:56.301 00:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f08cf7da-ff37-4ec7-bcc0-1a23eb435bd1 00:13:56.301 00:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:56.301 00:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:56.558 [2024-05-15 00:50:43.581761] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:56.558 [2024-05-15 00:50:43.581856] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:56.558 true 00:13:56.558 00:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f63ea0f-d7ff-4c64-b269-34889e794e9b 00:13:56.558 00:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:57.123 00:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:57.123 00:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s 
SPDK0 00:13:57.380 00:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f08cf7da-ff37-4ec7-bcc0-1a23eb435bd1 00:13:57.638 00:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:57.896 [2024-05-15 00:50:44.757452] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.896 00:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:58.154 00:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3997247 00:13:58.154 00:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:58.154 00:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3997247 /var/tmp/bdevperf.sock 00:13:58.154 00:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:58.154 00:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3997247 ']' 00:13:58.154 00:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:58.154 00:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:58.154 00:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:58.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:58.154 00:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:58.154 00:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:58.154 [2024-05-15 00:50:45.116799] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
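[editor's sketch, not captured output] The bdevperf invocation just above is the harness's standard pattern for driving I/O against the freshly exported lvol: the app is started with -z so it idles until configured over its private RPC socket, the script waits for that socket, and only then (as the following lines show) is the NVMe-oF controller attached and the workload started. A minimal sketch of the same flow, assuming $SPDK_DIR stands in for the SPDK checkout used by this job:

    # start bdevperf idle; -z makes it wait for configuration over its RPC socket
    $SPDK_DIR/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!
    # attach the target's namespace over TCP; it appears locally as bdev Nvme0n1
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # run the configured job (-w randwrite, -t 10) and print the per-second table
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests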
00:13:58.154 [2024-05-15 00:50:45.116891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3997247 ] 00:13:58.154 EAL: No free 2048 kB hugepages reported on node 1 00:13:58.154 [2024-05-15 00:50:45.177288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.412 [2024-05-15 00:50:45.294007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.412 00:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:58.412 00:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:13:58.412 00:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:58.977 Nvme0n1 00:13:58.977 00:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:59.235 [ 00:13:59.235 { 00:13:59.235 "name": "Nvme0n1", 00:13:59.235 "aliases": [ 00:13:59.235 "f08cf7da-ff37-4ec7-bcc0-1a23eb435bd1" 00:13:59.235 ], 00:13:59.235 "product_name": "NVMe disk", 00:13:59.235 "block_size": 4096, 00:13:59.235 "num_blocks": 38912, 00:13:59.235 "uuid": "f08cf7da-ff37-4ec7-bcc0-1a23eb435bd1", 00:13:59.235 "assigned_rate_limits": { 00:13:59.235 "rw_ios_per_sec": 0, 00:13:59.235 "rw_mbytes_per_sec": 0, 00:13:59.235 "r_mbytes_per_sec": 0, 00:13:59.235 "w_mbytes_per_sec": 0 00:13:59.235 }, 00:13:59.235 "claimed": false, 00:13:59.235 "zoned": false, 00:13:59.235 "supported_io_types": { 00:13:59.235 "read": true, 00:13:59.235 "write": true, 00:13:59.235 "unmap": true, 00:13:59.235 "write_zeroes": true, 00:13:59.235 "flush": true, 00:13:59.235 "reset": true, 00:13:59.235 "compare": true, 00:13:59.235 "compare_and_write": true, 00:13:59.235 "abort": true, 00:13:59.235 "nvme_admin": true, 00:13:59.235 "nvme_io": true 00:13:59.235 }, 00:13:59.235 "memory_domains": [ 00:13:59.235 { 00:13:59.235 "dma_device_id": "system", 00:13:59.235 "dma_device_type": 1 00:13:59.235 } 00:13:59.235 ], 00:13:59.235 "driver_specific": { 00:13:59.235 "nvme": [ 00:13:59.235 { 00:13:59.235 "trid": { 00:13:59.235 "trtype": "TCP", 00:13:59.235 "adrfam": "IPv4", 00:13:59.235 "traddr": "10.0.0.2", 00:13:59.235 "trsvcid": "4420", 00:13:59.235 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:59.235 }, 00:13:59.235 "ctrlr_data": { 00:13:59.235 "cntlid": 1, 00:13:59.235 "vendor_id": "0x8086", 00:13:59.235 "model_number": "SPDK bdev Controller", 00:13:59.235 "serial_number": "SPDK0", 00:13:59.235 "firmware_revision": "24.05", 00:13:59.235 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:59.235 "oacs": { 00:13:59.235 "security": 0, 00:13:59.235 "format": 0, 00:13:59.235 "firmware": 0, 00:13:59.235 "ns_manage": 0 00:13:59.235 }, 00:13:59.235 "multi_ctrlr": true, 00:13:59.235 "ana_reporting": false 00:13:59.235 }, 00:13:59.235 "vs": { 00:13:59.235 "nvme_version": "1.3" 00:13:59.235 }, 00:13:59.235 "ns_data": { 00:13:59.235 "id": 1, 00:13:59.235 "can_share": true 00:13:59.235 } 00:13:59.235 } 00:13:59.235 ], 00:13:59.235 "mp_policy": "active_passive" 00:13:59.235 } 00:13:59.235 } 00:13:59.235 ] 00:13:59.235 00:50:46 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3997348 00:13:59.235 00:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:59.235 00:50:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:59.235 Running I/O for 10 seconds... 00:14:00.609 Latency(us) 00:14:00.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:00.609 Nvme0n1 : 1.00 13654.00 53.34 0.00 0.00 0.00 0.00 0.00 00:14:00.609 =================================================================================================================== 00:14:00.609 Total : 13654.00 53.34 0.00 0.00 0.00 0.00 0.00 00:14:00.609 00:14:01.175 00:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1f63ea0f-d7ff-4c64-b269-34889e794e9b 00:14:01.432 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:01.432 Nvme0n1 : 2.00 13785.00 53.85 0.00 0.00 0.00 0.00 0.00 00:14:01.432 =================================================================================================================== 00:14:01.432 Total : 13785.00 53.85 0.00 0.00 0.00 0.00 0.00 00:14:01.432 00:14:01.432 true 00:14:01.432 00:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f63ea0f-d7ff-4c64-b269-34889e794e9b 00:14:01.432 00:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:01.690 00:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:01.690 00:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:01.690 00:50:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3997348 00:14:02.256 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:02.256 Nvme0n1 : 3.00 13847.67 54.09 0.00 0.00 0.00 0.00 0.00 00:14:02.256 =================================================================================================================== 00:14:02.256 Total : 13847.67 54.09 0.00 0.00 0.00 0.00 0.00 00:14:02.256 00:14:03.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:03.629 Nvme0n1 : 4.00 13878.25 54.21 0.00 0.00 0.00 0.00 0.00 00:14:03.629 =================================================================================================================== 00:14:03.629 Total : 13878.25 54.21 0.00 0.00 0.00 0.00 0.00 00:14:03.629 00:14:04.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:04.562 Nvme0n1 : 5.00 13922.00 54.38 0.00 0.00 0.00 0.00 0.00 00:14:04.562 =================================================================================================================== 00:14:04.562 Total : 13922.00 54.38 0.00 0.00 0.00 0.00 0.00 00:14:04.562 00:14:05.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:05.494 Nvme0n1 : 6.00 13962.00 54.54 0.00 0.00 0.00 0.00 0.00 00:14:05.494 
=================================================================================================================== 00:14:05.494 Total : 13962.00 54.54 0.00 0.00 0.00 0.00 0.00 00:14:05.494 00:14:06.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:06.426 Nvme0n1 : 7.00 13999.43 54.69 0.00 0.00 0.00 0.00 0.00 00:14:06.426 =================================================================================================================== 00:14:06.426 Total : 13999.43 54.69 0.00 0.00 0.00 0.00 0.00 00:14:06.426 00:14:07.359 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.359 Nvme0n1 : 8.00 14019.62 54.76 0.00 0.00 0.00 0.00 0.00 00:14:07.359 =================================================================================================================== 00:14:07.359 Total : 14019.62 54.76 0.00 0.00 0.00 0.00 0.00 00:14:07.359 00:14:08.292 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.292 Nvme0n1 : 9.00 14042.56 54.85 0.00 0.00 0.00 0.00 0.00 00:14:08.292 =================================================================================================================== 00:14:08.292 Total : 14042.56 54.85 0.00 0.00 0.00 0.00 0.00 00:14:08.292 00:14:09.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:09.225 Nvme0n1 : 10.00 14060.70 54.92 0.00 0.00 0.00 0.00 0.00 00:14:09.225 =================================================================================================================== 00:14:09.225 Total : 14060.70 54.92 0.00 0.00 0.00 0.00 0.00 00:14:09.225 00:14:09.225 00:14:09.225 Latency(us) 00:14:09.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:09.225 Nvme0n1 : 10.00 14055.40 54.90 0.00 0.00 9099.20 5558.42 17087.91 00:14:09.225 =================================================================================================================== 00:14:09.225 Total : 14055.40 54.90 0.00 0.00 9099.20 5558.42 17087.91 00:14:09.225 0 00:14:09.225 00:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3997247 00:14:09.225 00:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3997247 ']' 00:14:09.225 00:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3997247 00:14:09.484 00:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:14:09.484 00:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:09.484 00:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3997247 00:14:09.484 00:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:09.484 00:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:09.484 00:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3997247' 00:14:09.484 killing process with pid 3997247 00:14:09.484 00:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3997247 00:14:09.484 Received shutdown signal, test time was about 10.000000 seconds 00:14:09.484 00:14:09.484 Latency(us) 00:14:09.484 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:14:09.484 =================================================================================================================== 00:14:09.484 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:09.484 00:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3997247 00:14:09.484 00:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:10.051 00:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:10.309 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f63ea0f-d7ff-4c64-b269-34889e794e9b 00:14:10.309 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3995223 00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3995223 00:14:10.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3995223 Killed "${NVMF_APP[@]}" "$@" 00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3998447 00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3998447 00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3998447 ']' 00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
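[editor's sketch, not captured output] This is the step that makes the dirty variant dirty: the first target is killed with SIGKILL mid-flight, so the lvstore on aio_bdev is never unloaded cleanly, and a fresh nvmf_tgt is started in its place. When the AIO bdev is re-registered below, the blobstore detects the unclean shutdown and replays its metadata (the "Performing recovery on blobstore" notices that follow). A hedged sketch of the pattern, with $SPDK_DIR, $nvmfpid and $aio_file standing in for the values used in this run:

    kill -9 "$nvmfpid"                        # no clean shutdown: lvstore left dirty
    ip netns exec cvl_0_0_ns_spdk \
        $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # re-creating the AIO bdev triggers lvstore examine plus blobstore recovery
    $SPDK_DIR/scripts/rpc.py bdev_aio_create "$aio_file" aio_bdev 4096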
00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:10.568 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:10.568 [2024-05-15 00:50:57.518826] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:14:10.568 [2024-05-15 00:50:57.518920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.568 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.568 [2024-05-15 00:50:57.585610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.826 [2024-05-15 00:50:57.703302] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.826 [2024-05-15 00:50:57.703362] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.826 [2024-05-15 00:50:57.703378] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.826 [2024-05-15 00:50:57.703391] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.826 [2024-05-15 00:50:57.703403] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.826 [2024-05-15 00:50:57.703433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.826 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:10.826 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:14:10.826 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:10.826 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:10.826 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:10.826 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.826 00:50:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:11.084 [2024-05-15 00:50:58.110216] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:11.084 [2024-05-15 00:50:58.110358] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:11.084 [2024-05-15 00:50:58.110418] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:11.084 00:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:11.084 00:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f08cf7da-ff37-4ec7-bcc0-1a23eb435bd1 00:14:11.084 00:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=f08cf7da-ff37-4ec7-bcc0-1a23eb435bd1 00:14:11.084 00:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:11.084 00:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:14:11.084 00:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:11.084 00:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:11.084 00:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:11.650 00:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f08cf7da-ff37-4ec7-bcc0-1a23eb435bd1 -t 2000 00:14:11.650 [ 00:14:11.650 { 00:14:11.650 "name": "f08cf7da-ff37-4ec7-bcc0-1a23eb435bd1", 00:14:11.650 "aliases": [ 00:14:11.650 "lvs/lvol" 00:14:11.650 ], 00:14:11.650 "product_name": "Logical Volume", 00:14:11.650 "block_size": 4096, 00:14:11.650 "num_blocks": 38912, 00:14:11.650 "uuid": "f08cf7da-ff37-4ec7-bcc0-1a23eb435bd1", 00:14:11.650 "assigned_rate_limits": { 00:14:11.650 "rw_ios_per_sec": 0, 00:14:11.650 "rw_mbytes_per_sec": 0, 00:14:11.650 "r_mbytes_per_sec": 0, 00:14:11.650 "w_mbytes_per_sec": 0 00:14:11.650 }, 00:14:11.650 "claimed": false, 00:14:11.650 "zoned": false, 00:14:11.650 "supported_io_types": { 00:14:11.650 "read": true, 00:14:11.650 "write": true, 00:14:11.650 "unmap": true, 00:14:11.650 "write_zeroes": true, 00:14:11.650 "flush": false, 00:14:11.650 "reset": true, 00:14:11.650 "compare": false, 00:14:11.650 "compare_and_write": false, 00:14:11.650 "abort": false, 00:14:11.650 "nvme_admin": false, 00:14:11.650 "nvme_io": false 00:14:11.650 }, 00:14:11.650 "driver_specific": { 00:14:11.650 "lvol": { 00:14:11.650 "lvol_store_uuid": "1f63ea0f-d7ff-4c64-b269-34889e794e9b", 00:14:11.650 "base_bdev": "aio_bdev", 00:14:11.650 "thin_provision": false, 00:14:11.650 "num_allocated_clusters": 38, 00:14:11.650 "snapshot": false, 00:14:11.651 "clone": false, 00:14:11.651 "esnap_clone": false 00:14:11.651 } 00:14:11.651 } 00:14:11.651 } 00:14:11.651 ] 00:14:11.908 00:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:14:11.908 00:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:11.908 00:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f63ea0f-d7ff-4c64-b269-34889e794e9b 00:14:12.165 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:12.165 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f63ea0f-d7ff-4c64-b269-34889e794e9b 00:14:12.165 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:12.423 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:12.423 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:12.681 [2024-05-15 00:50:59.563717] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:12.681 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
1f63ea0f-d7ff-4c64-b269-34889e794e9b 00:14:12.681 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:12.681 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f63ea0f-d7ff-4c64-b269-34889e794e9b 00:14:12.681 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.681 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.681 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.681 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.681 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.681 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.681 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.681 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:12.681 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f63ea0f-d7ff-4c64-b269-34889e794e9b 00:14:12.939 request: 00:14:12.939 { 00:14:12.939 "uuid": "1f63ea0f-d7ff-4c64-b269-34889e794e9b", 00:14:12.939 "method": "bdev_lvol_get_lvstores", 00:14:12.939 "req_id": 1 00:14:12.939 } 00:14:12.939 Got JSON-RPC error response 00:14:12.939 response: 00:14:12.939 { 00:14:12.939 "code": -19, 00:14:12.939 "message": "No such device" 00:14:12.939 } 00:14:12.939 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:12.939 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:12.939 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:12.939 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:12.939 00:50:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:13.197 aio_bdev 00:14:13.197 00:51:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f08cf7da-ff37-4ec7-bcc0-1a23eb435bd1 00:14:13.197 00:51:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=f08cf7da-ff37-4ec7-bcc0-1a23eb435bd1 00:14:13.197 00:51:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:13.197 00:51:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:14:13.197 00:51:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
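[editor's sketch, not captured output] NOT is the harness's negative-assertion wrapper from common/autotest_common.sh, and the -19 / "No such device" JSON-RPC error above is the expected, passing outcome once aio_bdev has been deleted out from under the lvstore. A rough plain-bash equivalent of the same check (UUID taken from this run, $SPDK_DIR assumed):

    # succeed only if the RPC fails -- the lvstore must vanish with its base bdev
    if $SPDK_DIR/scripts/rpc.py bdev_lvol_get_lvstores \
            -u 1f63ea0f-d7ff-4c64-b269-34889e794e9b 2>/dev/null; then
        echo 'lvstore still registered after base bdev removal' >&2
        exit 1
    fi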
00:14:13.197 00:51:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:13.197 00:51:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:13.453 00:51:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f08cf7da-ff37-4ec7-bcc0-1a23eb435bd1 -t 2000 00:14:13.710 [ 00:14:13.710 { 00:14:13.710 "name": "f08cf7da-ff37-4ec7-bcc0-1a23eb435bd1", 00:14:13.710 "aliases": [ 00:14:13.710 "lvs/lvol" 00:14:13.710 ], 00:14:13.710 "product_name": "Logical Volume", 00:14:13.710 "block_size": 4096, 00:14:13.710 "num_blocks": 38912, 00:14:13.710 "uuid": "f08cf7da-ff37-4ec7-bcc0-1a23eb435bd1", 00:14:13.710 "assigned_rate_limits": { 00:14:13.710 "rw_ios_per_sec": 0, 00:14:13.710 "rw_mbytes_per_sec": 0, 00:14:13.710 "r_mbytes_per_sec": 0, 00:14:13.710 "w_mbytes_per_sec": 0 00:14:13.710 }, 00:14:13.710 "claimed": false, 00:14:13.710 "zoned": false, 00:14:13.710 "supported_io_types": { 00:14:13.710 "read": true, 00:14:13.710 "write": true, 00:14:13.710 "unmap": true, 00:14:13.710 "write_zeroes": true, 00:14:13.710 "flush": false, 00:14:13.710 "reset": true, 00:14:13.710 "compare": false, 00:14:13.710 "compare_and_write": false, 00:14:13.710 "abort": false, 00:14:13.710 "nvme_admin": false, 00:14:13.710 "nvme_io": false 00:14:13.710 }, 00:14:13.710 "driver_specific": { 00:14:13.710 "lvol": { 00:14:13.710 "lvol_store_uuid": "1f63ea0f-d7ff-4c64-b269-34889e794e9b", 00:14:13.710 "base_bdev": "aio_bdev", 00:14:13.710 "thin_provision": false, 00:14:13.710 "num_allocated_clusters": 38, 00:14:13.710 "snapshot": false, 00:14:13.710 "clone": false, 00:14:13.710 "esnap_clone": false 00:14:13.710 } 00:14:13.710 } 00:14:13.710 } 00:14:13.710 ] 00:14:13.710 00:51:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:14:13.710 00:51:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f63ea0f-d7ff-4c64-b269-34889e794e9b 00:14:13.710 00:51:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:13.968 00:51:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:13.968 00:51:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f63ea0f-d7ff-4c64-b269-34889e794e9b 00:14:13.968 00:51:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:14.227 00:51:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:14.227 00:51:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f08cf7da-ff37-4ec7-bcc0-1a23eb435bd1 00:14:14.484 00:51:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1f63ea0f-d7ff-4c64-b269-34889e794e9b 00:14:14.742 00:51:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:15.000 00:51:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:15.000 00:14:15.000 real 0m19.748s 00:14:15.000 user 0m50.403s 00:14:15.001 sys 0m4.416s 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:15.001 ************************************ 00:14:15.001 END TEST lvs_grow_dirty 00:14:15.001 ************************************ 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:15.001 nvmf_trace.0 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:15.001 rmmod nvme_tcp 00:14:15.001 rmmod nvme_fabrics 00:14:15.001 rmmod nvme_keyring 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3998447 ']' 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3998447 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3998447 ']' 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3998447 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3998447 00:14:15.001 00:51:01 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3998447' 00:14:15.001 killing process with pid 3998447 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3998447 00:14:15.001 00:51:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3998447 00:14:15.259 00:51:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:15.259 00:51:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:15.259 00:51:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:15.259 00:51:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.259 00:51:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:15.259 00:51:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.259 00:51:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.259 00:51:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.206 00:51:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:17.206 00:14:17.206 real 0m42.696s 00:14:17.206 user 1m13.542s 00:14:17.206 sys 0m7.898s 00:14:17.206 00:51:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:17.206 00:51:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:17.206 ************************************ 00:14:17.206 END TEST nvmf_lvs_grow 00:14:17.206 ************************************ 00:14:17.464 00:51:04 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:17.464 00:51:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:17.464 00:51:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:17.464 00:51:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:17.464 ************************************ 00:14:17.464 START TEST nvmf_bdev_io_wait 00:14:17.464 ************************************ 00:14:17.464 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:17.464 * Looking for test storage... 
00:14:17.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:17.465 00:51:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:19.371 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:14:19.371 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:19.371 Found net devices under 0000:08:00.0: cvl_0_0 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:19.371 Found net devices under 0000:08:00.1: cvl_0_1 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:19.371 00:51:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:19.371 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:19.371 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:19.371 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:19.371 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:19.371 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:19.371 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:19.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:19.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:14:19.372 00:14:19.372 --- 10.0.0.2 ping statistics --- 00:14:19.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.372 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:19.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:19.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:14:19.372 00:14:19.372 --- 10.0.0.1 ping statistics --- 00:14:19.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.372 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=4000465 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 4000465 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 4000465 ']' 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:19.372 [2024-05-15 00:51:06.125524] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
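[editor's sketch, not captured output] The two successful pings above are the sanity check on the physical-NIC topology this job builds before each target start: the first port (cvl_0_0, 10.0.0.2) is moved into a private network namespace to act as the target while its sibling (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, so NVMe/TCP traffic goes through the real NICs rather than loopback. A condensed sketch of that wiring, using the device names from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into its own ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator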
00:14:19.372 [2024-05-15 00:51:06.125611] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.372 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.372 [2024-05-15 00:51:06.192850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:19.372 [2024-05-15 00:51:06.314813] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.372 [2024-05-15 00:51:06.314871] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.372 [2024-05-15 00:51:06.314887] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.372 [2024-05-15 00:51:06.314901] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.372 [2024-05-15 00:51:06.314913] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.372 [2024-05-15 00:51:06.314996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.372 [2024-05-15 00:51:06.315057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.372 [2024-05-15 00:51:06.315078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:19.372 [2024-05-15 00:51:06.315082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.372 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:19.631 [2024-05-15 00:51:06.475068] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.631 00:51:06 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:19.631 Malloc0 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.631 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:19.632 [2024-05-15 00:51:06.538146] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:19.632 [2024-05-15 00:51:06.538420] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4000547 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4000548 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4000551 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:19.632 { 00:14:19.632 "params": { 00:14:19.632 
"name": "Nvme$subsystem", 00:14:19.632 "trtype": "$TEST_TRANSPORT", 00:14:19.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:19.632 "adrfam": "ipv4", 00:14:19.632 "trsvcid": "$NVMF_PORT", 00:14:19.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:19.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:19.632 "hdgst": ${hdgst:-false}, 00:14:19.632 "ddgst": ${ddgst:-false} 00:14:19.632 }, 00:14:19.632 "method": "bdev_nvme_attach_controller" 00:14:19.632 } 00:14:19.632 EOF 00:14:19.632 )") 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4000553 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:19.632 { 00:14:19.632 "params": { 00:14:19.632 "name": "Nvme$subsystem", 00:14:19.632 "trtype": "$TEST_TRANSPORT", 00:14:19.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:19.632 "adrfam": "ipv4", 00:14:19.632 "trsvcid": "$NVMF_PORT", 00:14:19.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:19.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:19.632 "hdgst": ${hdgst:-false}, 00:14:19.632 "ddgst": ${ddgst:-false} 00:14:19.632 }, 00:14:19.632 "method": "bdev_nvme_attach_controller" 00:14:19.632 } 00:14:19.632 EOF 00:14:19.632 )") 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:19.632 { 00:14:19.632 "params": { 00:14:19.632 "name": "Nvme$subsystem", 00:14:19.632 "trtype": "$TEST_TRANSPORT", 00:14:19.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:19.632 "adrfam": "ipv4", 00:14:19.632 "trsvcid": "$NVMF_PORT", 00:14:19.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:19.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:19.632 "hdgst": ${hdgst:-false}, 00:14:19.632 "ddgst": ${ddgst:-false} 00:14:19.632 }, 00:14:19.632 "method": "bdev_nvme_attach_controller" 00:14:19.632 } 00:14:19.632 EOF 00:14:19.632 )") 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:19.632 { 00:14:19.632 "params": { 00:14:19.632 "name": "Nvme$subsystem", 00:14:19.632 "trtype": "$TEST_TRANSPORT", 00:14:19.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:19.632 "adrfam": "ipv4", 00:14:19.632 "trsvcid": "$NVMF_PORT", 00:14:19.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:19.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:19.632 "hdgst": ${hdgst:-false}, 00:14:19.632 "ddgst": ${ddgst:-false} 00:14:19.632 }, 00:14:19.632 "method": "bdev_nvme_attach_controller" 00:14:19.632 } 00:14:19.632 EOF 00:14:19.632 )") 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4000547 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:19.632 "params": { 00:14:19.632 "name": "Nvme1", 00:14:19.632 "trtype": "tcp", 00:14:19.632 "traddr": "10.0.0.2", 00:14:19.632 "adrfam": "ipv4", 00:14:19.632 "trsvcid": "4420", 00:14:19.632 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.632 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:19.632 "hdgst": false, 00:14:19.632 "ddgst": false 00:14:19.632 }, 00:14:19.632 "method": "bdev_nvme_attach_controller" 00:14:19.632 }' 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:19.632 "params": { 00:14:19.632 "name": "Nvme1", 00:14:19.632 "trtype": "tcp", 00:14:19.632 "traddr": "10.0.0.2", 00:14:19.632 "adrfam": "ipv4", 00:14:19.632 "trsvcid": "4420", 00:14:19.632 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.632 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:19.632 "hdgst": false, 00:14:19.632 "ddgst": false 00:14:19.632 }, 00:14:19.632 "method": "bdev_nvme_attach_controller" 00:14:19.632 }' 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:19.632 "params": { 00:14:19.632 "name": "Nvme1", 00:14:19.632 "trtype": "tcp", 00:14:19.632 "traddr": "10.0.0.2", 00:14:19.632 "adrfam": "ipv4", 00:14:19.632 "trsvcid": "4420", 00:14:19.632 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.632 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:19.632 "hdgst": false, 00:14:19.632 "ddgst": false 00:14:19.632 }, 00:14:19.632 "method": "bdev_nvme_attach_controller" 00:14:19.632 }' 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:19.632 00:51:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:19.632 "params": { 00:14:19.632 "name": "Nvme1", 00:14:19.632 "trtype": "tcp", 00:14:19.632 "traddr": "10.0.0.2", 00:14:19.632 "adrfam": "ipv4", 00:14:19.632 "trsvcid": "4420", 
00:14:19.632 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.632 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:19.632 "hdgst": false, 00:14:19.632 "ddgst": false 00:14:19.632 }, 00:14:19.632 "method": "bdev_nvme_attach_controller" 00:14:19.632 }' 00:14:19.632 [2024-05-15 00:51:06.588481] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:14:19.632 [2024-05-15 00:51:06.588481] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:14:19.632 [2024-05-15 00:51:06.588481] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:14:19.632 [2024-05-15 00:51:06.588580] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 00:51:06.588580] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 00:51:06.588580] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:19.632 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:19.632 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:19.632 [2024-05-15 00:51:06.590079] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:14:19.632 [2024-05-15 00:51:06.590160] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:19.632 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.890 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.890 [2024-05-15 00:51:06.731723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.890 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.890 [2024-05-15 00:51:06.802008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.890 [2024-05-15 00:51:06.827654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:19.890 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.890 [2024-05-15 00:51:06.871057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.890 [2024-05-15 00:51:06.898997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:19.890 [2024-05-15 00:51:06.934740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.148 [2024-05-15 00:51:06.967726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:20.148 [2024-05-15 00:51:07.031290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:20.148 Running I/O for 1 seconds... 00:14:20.148 Running I/O for 1 seconds... 00:14:20.406 Running I/O for 1 seconds... 00:14:20.406 Running I/O for 1 seconds... 
00:14:21.341 00:14:21.341 Latency(us) 00:14:21.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.341 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:21.342 Nvme1n1 : 1.01 10279.47 40.15 0.00 0.00 12397.41 7233.23 21651.15 00:14:21.342 =================================================================================================================== 00:14:21.342 Total : 10279.47 40.15 0.00 0.00 12397.41 7233.23 21651.15 00:14:21.342 00:14:21.342 Latency(us) 00:14:21.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.342 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:21.342 Nvme1n1 : 1.00 99825.91 389.94 0.00 0.00 1276.87 503.66 2014.63 00:14:21.342 =================================================================================================================== 00:14:21.342 Total : 99825.91 389.94 0.00 0.00 1276.87 503.66 2014.63 00:14:21.342 00:14:21.342 Latency(us) 00:14:21.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.342 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:21.342 Nvme1n1 : 1.01 7265.63 28.38 0.00 0.00 17523.54 8980.86 28544.57 00:14:21.342 =================================================================================================================== 00:14:21.342 Total : 7265.63 28.38 0.00 0.00 17523.54 8980.86 28544.57 00:14:21.342 00:14:21.342 Latency(us) 00:14:21.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.342 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:21.342 Nvme1n1 : 1.01 7959.12 31.09 0.00 0.00 15990.53 9126.49 25049.32 00:14:21.342 =================================================================================================================== 00:14:21.342 Total : 7959.12 31.09 0.00 0.00 15990.53 9126.49 25049.32 00:14:21.342 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4000548 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4000551 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4000553 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:21.601 rmmod nvme_tcp 00:14:21.601 rmmod nvme_fabrics 00:14:21.601 rmmod nvme_keyring 00:14:21.601 00:51:08 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 4000465 ']' 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 4000465 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 4000465 ']' 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 4000465 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4000465 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4000465' 00:14:21.601 killing process with pid 4000465 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 4000465 00:14:21.601 [2024-05-15 00:51:08.640479] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:21.601 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 4000465 00:14:21.861 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:21.861 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:21.861 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:21.861 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.861 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:21.861 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.861 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.861 00:51:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.398 00:51:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:24.398 00:14:24.398 real 0m6.591s 00:14:24.398 user 0m15.086s 00:14:24.398 sys 0m3.419s 00:14:24.398 00:51:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:24.398 00:51:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:24.398 ************************************ 00:14:24.398 END TEST nvmf_bdev_io_wait 00:14:24.398 ************************************ 00:14:24.398 00:51:10 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:24.398 00:51:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:24.398 00:51:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:24.398 00:51:10 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:14:24.398 ************************************ 00:14:24.398 START TEST nvmf_queue_depth 00:14:24.398 ************************************ 00:14:24.398 00:51:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:24.398 * Looking for test storage... 00:14:24.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.398 00:51:11 
nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.398 00:51:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 
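The NVME_HOSTNQN assignment traced while sourcing nvmf/common.sh comes straight from nvme-cli, so each run gets a freshly generated UUID-based initiator identity. Reproduced stand-alone (the UUID shown is the one logged in this run; yours will differ):

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    echo "$NVME_HOSTNQN"   # e.g. nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc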
00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:24.399 00:51:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 
-- # pci_devs+=("${e810[@]}") 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:25.782 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:14:25.782 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:25.782 Found net devices under 0000:08:00.0: cvl_0_0 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
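The discovery loop above keys off PCI device ID 0x159b (an Intel E810 port bound to the ice driver) and then asks sysfs which net interfaces sit under each matched function, which is where the cvl_0_0/cvl_0_1 names come from. The same lookup done by hand:

    for pci in 0000:08:00.0 0000:08:00.1; do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
    done
    # 0000:08:00.0 -> cvl_0_0
    # 0000:08:00.1 -> cvl_0_1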
00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:25.782 Found net devices under 0000:08:00.1: cvl_0_1 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:25.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:25.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:14:25.782 00:14:25.782 --- 10.0.0.2 ping statistics --- 00:14:25.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.782 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:25.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:25.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:14:25.782 00:14:25.782 --- 10.0.0.1 ping statistics --- 00:14:25.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.782 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:25.782 00:51:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:25.783 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=4002782 00:14:25.783 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:25.783 00:51:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 4002782 00:14:25.783 00:51:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 4002782 ']' 00:14:25.783 00:51:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.783 00:51:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:25.783 00:51:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.783 00:51:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:25.783 00:51:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:25.783 [2024-05-15 00:51:12.819290] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
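nvmfappstart above launches the target inside the namespace and then waitforlisten polls until the application answers on /var/tmp/spdk.sock. A reduced sketch of that start-and-wait handshake (rpc_get_methods serves here only as a cheap liveness probe; the in-tree waitforlisten helper does roughly this with bounded retries):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5   # keep probing until the RPC server is listening
    done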
00:14:25.783 [2024-05-15 00:51:12.819389] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.041 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.041 [2024-05-15 00:51:12.883744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.041 [2024-05-15 00:51:13.002529] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.041 [2024-05-15 00:51:13.002594] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.041 [2024-05-15 00:51:13.002610] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.041 [2024-05-15 00:51:13.002624] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.041 [2024-05-15 00:51:13.002635] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.041 [2024-05-15 00:51:13.002674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:26.300 [2024-05-15 00:51:13.142234] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:26.300 Malloc0 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.300 00:51:13 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:26.300 [2024-05-15 00:51:13.195867] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:26.300 [2024-05-15 00:51:13.196133] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4002805 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4002805 /var/tmp/bdevperf.sock 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 4002805 ']' 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:26.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:26.300 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:26.300 [2024-05-15 00:51:13.246004] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
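Unlike the bdev_io_wait runs, this bdevperf is started with -z (wait for RPC) on its own socket, so the NVMe controller is attached at runtime and the 10-second verify workload at queue depth 1024 is then triggered through bdevperf.py, as traced above and below. The flow, condensed:

    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # once the bdevperf RPC socket is up:
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests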
00:14:26.300 [2024-05-15 00:51:13.246096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002805 ] 00:14:26.300 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.300 [2024-05-15 00:51:13.306507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.560 [2024-05-15 00:51:13.423548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.560 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:26.560 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:14:26.560 00:51:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:26.560 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.560 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:26.818 NVMe0n1 00:14:26.818 00:51:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.818 00:51:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:26.818 Running I/O for 10 seconds... 00:14:39.016 00:14:39.016 Latency(us) 00:14:39.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.016 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:39.016 Verification LBA range: start 0x0 length 0x4000 00:14:39.016 NVMe0n1 : 10.10 7801.55 30.47 0.00 0.00 130642.79 28738.75 82721.00 00:14:39.016 =================================================================================================================== 00:14:39.016 Total : 7801.55 30.47 0.00 0.00 130642.79 28738.75 82721.00 00:14:39.016 0 00:14:39.016 00:51:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4002805 00:14:39.016 00:51:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 4002805 ']' 00:14:39.016 00:51:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 4002805 00:14:39.016 00:51:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:14:39.016 00:51:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:39.016 00:51:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4002805 00:14:39.016 00:51:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:39.016 00:51:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:39.016 00:51:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4002805' 00:14:39.016 killing process with pid 4002805 00:14:39.016 00:51:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 4002805 00:14:39.016 Received shutdown signal, test time was about 10.000000 seconds 00:14:39.016 00:14:39.016 Latency(us) 00:14:39.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.016 =================================================================================================================== 00:14:39.016 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:39.016 00:51:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 4002805 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:39.016 rmmod nvme_tcp 00:14:39.016 rmmod nvme_fabrics 00:14:39.016 rmmod nvme_keyring 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 4002782 ']' 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 4002782 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 4002782 ']' 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 4002782 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4002782 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4002782' 00:14:39.016 killing process with pid 4002782 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 4002782 00:14:39.016 [2024-05-15 00:51:24.224940] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 4002782 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.016 00:51:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:39.017 00:51:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.017 00:51:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.017 00:51:24 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.584 00:51:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:39.584 00:14:39.584 real 0m15.548s 00:14:39.584 user 0m22.467s 00:14:39.584 sys 0m2.658s 00:14:39.584 00:51:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:39.584 00:51:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:39.584 ************************************ 00:14:39.584 END TEST nvmf_queue_depth 00:14:39.584 ************************************ 00:14:39.584 00:51:26 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:39.584 00:51:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:39.584 00:51:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:39.584 00:51:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:39.584 ************************************ 00:14:39.584 START TEST nvmf_target_multipath 00:14:39.584 ************************************ 00:14:39.584 00:51:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:39.584 * Looking for test storage... 00:14:39.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:39.584 00:51:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:39.584 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:39.584 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.584 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.584 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.584 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.584 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.584 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.584 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.584 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.584 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.584 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:39.585 00:51:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:41.489 00:51:28 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:41.489 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:14:41.489 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.489 00:51:28 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:41.489 Found net devices under 0000:08:00.0: cvl_0_0 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:41.489 Found net devices under 0000:08:00.1: cvl_0_1 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.489 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:41.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:14:41.490 00:14:41.490 --- 10.0.0.2 ping statistics --- 00:14:41.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.490 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
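Stripped of the xtrace prefixes, the interface plumbing that nvmf_tcp_init performs in the lines above reduces to the short sequence below. This is a condensed sketch using the cvl_0_0/cvl_0_1 device names and 10.0.0.x addresses from this run; on a different rig the E810 port names would differ. The target-side port is moved into its own network namespace so initiator traffic must cross the physical link instead of short-circuiting through the kernel loopback, and nvmf_tgt is later launched under the same "ip netns exec" prefix so it binds 10.0.0.2 inside that namespace.

ip netns add cvl_0_0_ns_spdk                 # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move one physical port into it
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                             # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> host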
00:14:41.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:14:41.490 00:14:41.490 --- 10.0.0.1 ping statistics --- 00:14:41.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.490 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:41.490 only one NIC for nvmf test 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:41.490 rmmod nvme_tcp 00:14:41.490 rmmod nvme_fabrics 00:14:41.490 rmmod nvme_keyring 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.490 00:51:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:44.027 00:14:44.027 real 0m3.932s 00:14:44.027 user 0m0.645s 00:14:44.027 sys 0m1.275s 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:44.027 00:51:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:44.027 ************************************ 00:14:44.027 END TEST nvmf_target_multipath 00:14:44.027 ************************************ 00:14:44.027 00:51:30 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:44.027 00:51:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:44.027 00:51:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:44.027 00:51:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:44.027 ************************************ 00:14:44.027 START TEST nvmf_zcopy 00:14:44.027 ************************************ 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:44.027 * Looking for test storage... 
00:14:44.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:44.027 00:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:45.404 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:45.404 
00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:14:45.404 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:45.404 Found net devices under 0000:08:00.0: cvl_0_0 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:45.404 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:45.405 Found net devices under 0000:08:00.1: cvl_0_1 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:45.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:14:45.405 00:14:45.405 --- 10.0.0.2 ping statistics --- 00:14:45.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.405 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:45.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:45.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:14:45.405 00:14:45.405 --- 10.0.0.1 ping statistics --- 00:14:45.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.405 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=4006699 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 4006699 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 4006699 ']' 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:45.405 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.405 [2024-05-15 00:51:32.452238] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:14:45.405 [2024-05-15 00:51:32.452341] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.663 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.663 [2024-05-15 00:51:32.517397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.663 [2024-05-15 00:51:32.635746] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.663 [2024-05-15 00:51:32.635809] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:45.663 [2024-05-15 00:51:32.635825] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.663 [2024-05-15 00:51:32.635839] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.663 [2024-05-15 00:51:32.635851] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.663 [2024-05-15 00:51:32.635888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.921 [2024-05-15 00:51:32.762406] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.921 [2024-05-15 00:51:32.778327] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:45.921 [2024-05-15 00:51:32.778565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.921 malloc0 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:45.921 { 00:14:45.921 "params": { 00:14:45.921 "name": "Nvme$subsystem", 00:14:45.921 "trtype": "$TEST_TRANSPORT", 00:14:45.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:45.921 "adrfam": "ipv4", 00:14:45.921 "trsvcid": "$NVMF_PORT", 00:14:45.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:45.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:45.921 "hdgst": ${hdgst:-false}, 00:14:45.921 "ddgst": ${ddgst:-false} 00:14:45.921 }, 00:14:45.921 "method": "bdev_nvme_attach_controller" 00:14:45.921 } 00:14:45.921 EOF 00:14:45.921 )") 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:45.921 00:51:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:45.921 "params": { 00:14:45.921 "name": "Nvme1", 00:14:45.921 "trtype": "tcp", 00:14:45.921 "traddr": "10.0.0.2", 00:14:45.921 "adrfam": "ipv4", 00:14:45.921 "trsvcid": "4420", 00:14:45.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:45.921 "hdgst": false, 00:14:45.921 "ddgst": false 00:14:45.921 }, 00:14:45.921 "method": "bdev_nvme_attach_controller" 00:14:45.921 }' 00:14:45.921 [2024-05-15 00:51:32.856926] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:14:45.921 [2024-05-15 00:51:32.857025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4006816 ] 00:14:45.921 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.921 [2024-05-15 00:51:32.915824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.179 [2024-05-15 00:51:33.035139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.437 Running I/O for 10 seconds... 
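"Running I/O for 10 seconds..." above marks the first bdevperf pass: a queue-depth-128 verify workload with 8 KiB I/O against the zcopy-enabled target (the transport was created with --zcopy earlier in the trace). The /dev/fd/62 argument is bash process substitution: gen_nvmf_target_json assembles the bdev_nvme_attach_controller parameters in memory and bdevperf reads them from an anonymous pipe, so no config file ever touches disk. A minimal sketch of that pattern, filled in with the Nvme1/cnode1 values printed above (the fragment template is copied from the trace; the top-level "subsystems" wrapper is the standard SPDK JSON-config shape and is assumed here rather than shown verbatim in the log):

gen_nvmf_target_json() {
    # One attach_controller fragment per subsystem, built from a heredoc.
    local config=()
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
    # Comma-join the fragments into a bdev-subsystem document and let jq
    # validate and pretty-print it.
    local IFS=,
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ ${config[*]} ]
    }
  ]
}
JSON
}

# bdevperf reads the generated JSON through process substitution (/dev/fd/NN):
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192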
00:14:56.408 00:14:56.408 Latency(us) 00:14:56.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.408 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:56.408 Verification LBA range: start 0x0 length 0x1000 00:14:56.408 Nvme1n1 : 10.01 5419.59 42.34 0.00 0.00 23543.53 655.36 33204.91 00:14:56.408 =================================================================================================================== 00:14:56.408 Total : 5419.59 42.34 0.00 0.00 23543.53 655.36 33204.91 00:14:56.666 00:51:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4007723 00:14:56.666 00:51:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:56.666 00:51:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:56.666 00:51:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:56.666 00:51:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:56.666 00:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:56.666 00:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:56.666 00:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:56.666 00:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:56.666 { 00:14:56.666 "params": { 00:14:56.666 "name": "Nvme$subsystem", 00:14:56.666 "trtype": "$TEST_TRANSPORT", 00:14:56.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:56.666 "adrfam": "ipv4", 00:14:56.666 "trsvcid": "$NVMF_PORT", 00:14:56.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:56.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:56.666 "hdgst": ${hdgst:-false}, 00:14:56.666 "ddgst": ${ddgst:-false} 00:14:56.666 }, 00:14:56.666 "method": "bdev_nvme_attach_controller" 00:14:56.666 } 00:14:56.666 EOF 00:14:56.666 )") 00:14:56.666 00:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:56.666 [2024-05-15 00:51:43.512870] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.666 [2024-05-15 00:51:43.512923] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.666 00:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
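From here to the end of the capture, the second bdevperf pass (5 seconds of randrw at a 50/50 mix, queue depth 128, 8 KiB I/O) runs while the test churns the subsystem's namespace over the RPC socket. Each "Requested NSID 1 already in use" / "Unable to add namespace" pair below is the target rejecting an add for a namespace that is still attached; the errors are the expected signature of exercising namespace pause/resume under active zero-copy I/O, not a failure. A hypothetical reconstruction of the loop, using the same rpc_cmd helper and cnode1/malloc0 names seen earlier (the real loop lives in test/nvmf/target/zcopy.sh and may differ in detail):

# Detach and re-attach NSID 1 for as long as bdevperf ($perfpid) is alive.
while kill -0 "$perfpid" 2> /dev/null; do
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 || true
    # The re-add can race the in-flight removal, which is what produces the
    # "Requested NSID 1 already in use" errors in the log.
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done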
00:14:56.666 00:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:56.666 00:51:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:56.666 "params": { 00:14:56.666 "name": "Nvme1", 00:14:56.666 "trtype": "tcp", 00:14:56.666 "traddr": "10.0.0.2", 00:14:56.666 "adrfam": "ipv4", 00:14:56.666 "trsvcid": "4420", 00:14:56.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:56.666 "hdgst": false, 00:14:56.666 "ddgst": false 00:14:56.666 }, 00:14:56.666 "method": "bdev_nvme_attach_controller" 00:14:56.666 }' 00:14:56.666 [2024-05-15 00:51:43.520825] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.666 [2024-05-15 00:51:43.520850] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.666 [2024-05-15 00:51:43.528844] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.666 [2024-05-15 00:51:43.528868] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.666 [2024-05-15 00:51:43.536866] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.666 [2024-05-15 00:51:43.536889] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.666 [2024-05-15 00:51:43.544886] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.666 [2024-05-15 00:51:43.544909] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.666 [2024-05-15 00:51:43.552909] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.666 [2024-05-15 00:51:43.552938] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.666 [2024-05-15 00:51:43.553462] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:14:56.666 [2024-05-15 00:51:43.553549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4007723 ]
[... the paired "Requested NSID 1 already in use" / "Unable to add namespace" *ERROR* lines repeat through 00:51:43.577 ...]
00:14:56.666 EAL: No free 2048 kB hugepages reported on node 1
[... the same error pair repeats through 00:51:43.609 ...]
00:14:56.666 [2024-05-15 00:51:43.614076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[... the same error pair repeats through 00:51:43.729 ...]
00:14:56.924 [2024-05-15 00:51:43.733019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[... the same error pair repeats through 00:51:44.026 ...]
00:14:57.183 Running I/O for 5 seconds...
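
From here to the end of the run, the log is dominated by that same pair of *ERROR* lines. This is expected for this test: while bdevperf drives zero-copy I/O for the five seconds announced above, the suite keeps detaching and re-attaching the subsystem's namespace over RPC, and a re-add that lands while NSID 1 is still registered is rejected with exactly these two messages. The real loop runs inside target/zcopy.sh with xtrace disabled, so the commands below are an assumed reconstruction, not a transcript; Malloc0 is a stand-in bdev name.

# Rough sketch of a namespace hotplug loop running while bdevperf I/O is
# in flight. rpc_cmd is the autotest wrapper around scripts/rpc.py.
while kill -0 "$perfpid" 2> /dev/null; do
  # Detach namespace 1 from the subsystem...
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 || true
  # ...and re-attach it. If this races with a still-registered NSID 1,
  # the target logs "Requested NSID 1 already in use" followed by
  # "Unable to add namespace" and the iteration becomes a no-op.
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1 || true
done
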
[... the paired "Requested NSID 1 already in use" / "Unable to add namespace" *ERROR* lines keep repeating at roughly 13 ms intervals for the rest of the 5-second run, through 00:51:46.808, where this excerpt breaks off mid-entry ...]
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.799 [2024-05-15 00:51:46.821784] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.799 [2024-05-15 00:51:46.821814] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.799 [2024-05-15 00:51:46.834626] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.799 [2024-05-15 00:51:46.834656] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.799 [2024-05-15 00:51:46.847782] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.799 [2024-05-15 00:51:46.847813] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.058 [2024-05-15 00:51:46.861181] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:46.861213] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:46.873809] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:46.873840] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:46.886751] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:46.886781] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:46.899226] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:46.899257] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:46.912049] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:46.912079] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:46.924920] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:46.924962] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:46.937939] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:46.937969] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:46.951144] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:46.951175] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:46.964589] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:46.964629] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:46.977433] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:46.977463] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:46.990328] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:46.990360] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:47.003390] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:47.003421] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:47.016601] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:47.016631] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:47.029504] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:47.029535] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:47.042669] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:47.042699] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:47.055714] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:47.055744] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:47.068580] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:47.068611] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:47.081606] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:47.081638] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:47.094992] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:47.095026] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.059 [2024-05-15 00:51:47.108257] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.059 [2024-05-15 00:51:47.108287] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.121457] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.121493] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.134325] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.134356] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.147015] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.147046] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.159860] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.159897] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.172681] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.172720] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.185285] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.185315] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.198415] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.198453] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.211381] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.211415] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.224121] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.224155] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.236602] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.236655] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.249037] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.249068] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.261614] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.261644] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.274741] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.274771] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.287722] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.287760] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.300845] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.300886] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.313757] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.313787] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.326377] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.326427] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.339487] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.339517] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.352616] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.352647] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.317 [2024-05-15 00:51:47.365876] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.317 [2024-05-15 00:51:47.365913] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.576 [2024-05-15 00:51:47.378650] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.576 [2024-05-15 00:51:47.378680] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.576 [2024-05-15 00:51:47.392376] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.576 [2024-05-15 00:51:47.392406] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.576 [2024-05-15 00:51:47.405788] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.576 [2024-05-15 00:51:47.405822] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.576 [2024-05-15 00:51:47.418703] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.576 [2024-05-15 00:51:47.418734] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.576 [2024-05-15 00:51:47.431859] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.576 [2024-05-15 00:51:47.431891] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.576 [2024-05-15 00:51:47.445265] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.576 [2024-05-15 00:51:47.445297] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.576 [2024-05-15 00:51:47.458197] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.576 [2024-05-15 00:51:47.458229] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.576 [2024-05-15 00:51:47.470784] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.576 [2024-05-15 00:51:47.470815] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.576 [2024-05-15 00:51:47.483618] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.576 [2024-05-15 00:51:47.483650] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.576 [2024-05-15 00:51:47.496476] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.576 [2024-05-15 00:51:47.496507] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.576 [2024-05-15 00:51:47.509314] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.576 [2024-05-15 00:51:47.509345] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.576 [2024-05-15 00:51:47.522717] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.576 [2024-05-15 00:51:47.522749] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.576 [2024-05-15 00:51:47.535632] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.576 [2024-05-15 00:51:47.535663] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.576 [2024-05-15 00:51:47.548384] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.576 [2024-05-15 00:51:47.548416] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.576 [2024-05-15 00:51:47.561059] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.576 [2024-05-15 00:51:47.561090] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.576 [2024-05-15 00:51:47.573984] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.576 [2024-05-15 00:51:47.574023] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.576 [2024-05-15 00:51:47.586773] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.577 [2024-05-15 00:51:47.586805] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.577 [2024-05-15 00:51:47.599471] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.577 [2024-05-15 00:51:47.599502] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.577 [2024-05-15 00:51:47.611958] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.577 [2024-05-15 00:51:47.611988] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.577 [2024-05-15 00:51:47.624584] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.577 [2024-05-15 00:51:47.624615] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.637560] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.637593] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.650719] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.650750] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.664603] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.664634] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.677674] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.677706] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.690772] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.690803] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.703814] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.703846] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.717111] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.717142] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.730479] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.730510] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.743995] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.744026] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.756885] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.756916] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.769944] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.769974] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.782791] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.782823] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.795715] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.795746] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.808507] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.808537] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.821387] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.821426] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.834358] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.834389] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.846771] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.846802] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.859577] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.859608] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.872397] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.872427] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:00.836 [2024-05-15 00:51:47.885510] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:00.836 [2024-05-15 00:51:47.885540] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:47.898110] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:47.898141] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:47.910765] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:47.910811] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:47.923628] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:47.923658] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:47.936476] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:47.936521] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:47.949191] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:47.949222] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:47.961692] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:47.961723] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:47.974381] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:47.974411] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:47.986968] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:47.986998] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:47.999390] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:47.999428] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:48.012151] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:48.012185] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:48.024857] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:48.024897] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:48.037675] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:48.037706] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:48.050571] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:48.050602] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:48.063781] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:48.063823] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:48.077080] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:48.077110] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:48.089821] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:48.089852] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:48.102461] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:48.102496] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:48.115236] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:48.115266] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:48.127980] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:48.128019] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.095 [2024-05-15 00:51:48.140704] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.095 [2024-05-15 00:51:48.140737] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.153955] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.153995] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.167052] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.167083] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.180401] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.180431] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.193431] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.193467] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.206259] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.206289] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.219309] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.219341] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.232251] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.232289] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.244807] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.244837] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.257840] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.257878] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.270414] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.270453] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.282977] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.283022] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.295552] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.295596] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.308216] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.308256] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.321105] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.321142] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.333548] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.333578] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.346655] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.346685] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.359326] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.359371] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.372553] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.372592] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.385699] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.385729] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.354 [2024-05-15 00:51:48.398771] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.354 [2024-05-15 00:51:48.398801] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.613 [2024-05-15 00:51:48.411995] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.613 [2024-05-15 00:51:48.412028] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.613 [2024-05-15 00:51:48.425011] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.613 [2024-05-15 00:51:48.425042] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.613 [2024-05-15 00:51:48.438284] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.613 [2024-05-15 00:51:48.438315] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.613 [2024-05-15 00:51:48.450989] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.613 [2024-05-15 00:51:48.451019] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.613 [2024-05-15 00:51:48.463491] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.613 [2024-05-15 00:51:48.463529] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.613 [2024-05-15 00:51:48.476447] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.613 [2024-05-15 00:51:48.476477] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.613 [2024-05-15 00:51:48.489378] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.613 [2024-05-15 00:51:48.489408] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.613 [2024-05-15 00:51:48.502621] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.613 [2024-05-15 00:51:48.502651] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.614 [2024-05-15 00:51:48.515369] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.614 [2024-05-15 00:51:48.515412] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.614 [2024-05-15 00:51:48.528338] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.614 [2024-05-15 00:51:48.528368] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.614 [2024-05-15 00:51:48.541566] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.614 [2024-05-15 00:51:48.541597] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.614 [2024-05-15 00:51:48.554965] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.614 [2024-05-15 00:51:48.554995] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.614 [2024-05-15 00:51:48.567994] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.614 [2024-05-15 00:51:48.568030] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.614 [2024-05-15 00:51:48.580927] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.614 [2024-05-15 00:51:48.580968] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.614 [2024-05-15 00:51:48.593626] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.614 [2024-05-15 00:51:48.593657] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.614 [2024-05-15 00:51:48.606596] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.614 [2024-05-15 00:51:48.606643] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.614 [2024-05-15 00:51:48.619178] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.614 [2024-05-15 00:51:48.619209] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.614 [2024-05-15 00:51:48.632054] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.614 [2024-05-15 00:51:48.632092] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.614 [2024-05-15 00:51:48.645158] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.614 [2024-05-15 00:51:48.645188] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.614 [2024-05-15 00:51:48.657826] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.614 [2024-05-15 00:51:48.657856] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.671019] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.671059] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.684227] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.684264] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.697243] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.697274] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.710011] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.710050] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.723039] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.723071] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.736140] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.736170] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.749364] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.749401] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.762738] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.762769] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.775616] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.775647] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.788432] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.788463] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.801744] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.801775] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.814328] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.814358] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.827508] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.827538] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.840653] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.840683] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.853495] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.853529] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.865988] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.866018] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.878631] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.878661] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.891516] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.891546] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.904239] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.904269] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.917278] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.917308] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.930060] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.930090] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.894 [2024-05-15 00:51:48.943759] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.894 [2024-05-15 00:51:48.943799] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.188 [2024-05-15 00:51:48.959317] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.188 [2024-05-15 00:51:48.959352] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.188 [2024-05-15 00:51:48.972414] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.188 [2024-05-15 00:51:48.972445] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:48.984910] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:48.984949] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:48.997418] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:48.997457] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.010351] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.010382] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.023097] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.023127] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.035773] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.035803] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.048330] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.048359] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 00:15:02.189 Latency(us) 00:15:02.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.189 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 
50, depth: 128, IO size: 8192) 00:15:02.189 Nvme1n1 : 5.01 9808.56 76.63 0.00 0.00 13029.82 6116.69 24660.95 00:15:02.189 =================================================================================================================== 00:15:02.189 Total : 9808.56 76.63 0.00 0.00 13029.82 6116.69 24660.95 00:15:02.189 [2024-05-15 00:51:49.059066] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.059099] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.067079] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.067107] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.075092] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.075120] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.083192] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.083255] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.091210] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.091268] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.099225] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.099280] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.107255] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.107311] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.115278] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.115336] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.123313] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.123374] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.131321] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.131378] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.139348] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.139403] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.147361] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.147411] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.155315] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.155339] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.163350] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.163380] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.171368] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.171409] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.179383] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.179409] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.187482] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.187541] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.195502] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.195558] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.203487] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.203529] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.211464] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.211488] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.189 [2024-05-15 00:51:49.219523] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.189 [2024-05-15 00:51:49.219572] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.447 [2024-05-15 00:51:49.227536] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.447 [2024-05-15 00:51:49.227570] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.447 [2024-05-15 00:51:49.235537] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.447 [2024-05-15 00:51:49.235564] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.447 [2024-05-15 00:51:49.243625] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.447 [2024-05-15 00:51:49.243681] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.447 [2024-05-15 00:51:49.251646] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.447 [2024-05-15 00:51:49.251697] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.447 [2024-05-15 00:51:49.259597] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.447 [2024-05-15 00:51:49.259621] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.447 [2024-05-15 00:51:49.267620] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.447 [2024-05-15 00:51:49.267644] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.447 [2024-05-15 00:51:49.275649] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.447 [2024-05-15 00:51:49.275674] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4007723) - No such process 00:15:02.447 00:51:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4007723 00:15:02.447 00:51:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.447 00:51:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.447 00:51:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:02.447 00:51:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.447 00:51:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:02.447 00:51:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.447 00:51:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:02.447 delay0 00:15:02.447 00:51:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.447 00:51:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:02.447 00:51:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.447 00:51:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:02.447 00:51:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.447 00:51:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:02.447 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.447 [2024-05-15 00:51:49.395142] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:10.556 Initializing NVMe Controllers 00:15:10.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:10.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:10.556 Initialization complete. Launching workers. 
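The zcopy.sh@52-@54 records above swap the subsystem's namespace for a delay bdev before the abort run's statistics are printed below. Outside the harness, roughly the same sequence can be issued with SPDK's scripts/rpc.py (a sketch, assuming a target is already running and exposes nqn.2016-06.io.spdk:cnode1 backed by an existing malloc0 bdev; rpc_cmd in the trace is the test suite's wrapper around these RPCs):

    # Drop the current namespace so NSID 1 becomes free again.
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # Wrap malloc0 in a delay bdev; -r/-t set the average/p99 read latency
    # and -w/-n the average/p99 write latency, all in microseconds.
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Re-expose the delayed bdev as NSID 1, so initiator I/O sees ~1 s of
    # added latency and the abort example has outstanding commands to abort.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1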
00:15:10.556 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 2627 00:15:10.556 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 2914, failed to submit 33 00:15:10.556 success 2738, unsuccess 176, failed 0 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:10.556 rmmod nvme_tcp 00:15:10.556 rmmod nvme_fabrics 00:15:10.556 rmmod nvme_keyring 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 4006699 ']' 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 4006699 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 4006699 ']' 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 4006699 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4006699 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4006699' 00:15:10.556 killing process with pid 4006699 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 4006699 00:15:10.556 [2024-05-15 00:51:56.244613] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 4006699 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:10.556 00:51:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:10.557 00:51:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:10.557 00:51:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.557 00:51:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.557 00:51:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.495 
00:51:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:11.495 00:15:11.495 real 0m27.956s 00:15:11.495 user 0m41.307s 00:15:11.495 sys 0m8.129s 00:15:11.495 00:51:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:11.495 00:51:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:11.495 ************************************ 00:15:11.495 END TEST nvmf_zcopy 00:15:11.495 ************************************ 00:15:11.495 00:51:58 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:11.495 00:51:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:11.495 00:51:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:11.496 00:51:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:11.755 ************************************ 00:15:11.755 START TEST nvmf_nmic 00:15:11.755 ************************************ 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:11.755 * Looking for test storage... 00:15:11.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # 
nvmftestinit 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:11.755 00:51:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:15:13.665 Found 0000:08:00.0 (0x8086 - 0x159b) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:15:13.665 Found 0000:08:00.1 (0x8086 - 0x159b) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:15:13.665 Found net devices under 0000:08:00.0: cvl_0_0 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:13.665 00:52:00 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:15:13.665 Found net devices under 0000:08:00.1: cvl_0_1 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:13.665 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:13.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:13.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:15:13.665 00:15:13.665 --- 10.0.0.2 ping statistics --- 00:15:13.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.666 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:13.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:13.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:15:13.666 00:15:13.666 --- 10.0.0.1 ping statistics --- 00:15:13.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.666 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=4010413 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 4010413 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 4010413 ']' 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:13.666 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:13.666 [2024-05-15 00:52:00.452780] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
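The nvmf_tgt start logged just above runs inside the cvl_0_0_ns_spdk namespace; the plumbing it depends on, condensed from the trace earlier (cvl_0_0/cvl_0_1 are the two E810 ports found during the device scan; a sketch, not the full nvmf/common.sh logic):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator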
00:15:13.666 [2024-05-15 00:52:00.452882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.666 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.666 [2024-05-15 00:52:00.519578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:13.666 [2024-05-15 00:52:00.640814] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.666 [2024-05-15 00:52:00.640875] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.666 [2024-05-15 00:52:00.640891] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.666 [2024-05-15 00:52:00.640904] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.666 [2024-05-15 00:52:00.640916] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.666 [2024-05-15 00:52:00.640999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.666 [2024-05-15 00:52:00.641058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.666 [2024-05-15 00:52:00.641110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:13.666 [2024-05-15 00:52:00.641145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.924 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:13.924 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:13.925 [2024-05-15 00:52:00.789636] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:13.925 Malloc0 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:13.925 [2024-05-15 00:52:00.839682] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:13.925 [2024-05-15 00:52:00.839959] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:13.925 test case1: single bdev can't be used in multiple subsystems 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:13.925 [2024-05-15 00:52:00.863798] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:13.925 [2024-05-15 00:52:00.863832] subsystem.c:2015:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:13.925 [2024-05-15 00:52:00.863849] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:13.925 request: 00:15:13.925 { 00:15:13.925 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:13.925 "namespace": { 00:15:13.925 "bdev_name": "Malloc0", 00:15:13.925 "no_auto_visible": false 00:15:13.925 }, 00:15:13.925 "method": "nvmf_subsystem_add_ns", 00:15:13.925 "req_id": 1 00:15:13.925 } 00:15:13.925 Got JSON-RPC error response 00:15:13.925 response: 00:15:13.925 { 00:15:13.925 "code": -32602, 00:15:13.925 "message": "Invalid parameters" 00:15:13.925 } 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:13.925 00:52:00 
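The JSON-RPC error above is the point of test case1: Malloc0 is already claimed exclusive_write by cnode1, so attaching it to cnode2 must fail. Reduced to bare RPCs (a sketch; NQNs, serials, and sizes as used in this run):

    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # claims Malloc0 exclusive_write
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # rejected: bdev already claimed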
nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:13.925 Adding namespace failed - expected result. 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:13.925 test case2: host connect to nvmf target in multiple paths 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:13.925 [2024-05-15 00:52:00.871919] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.925 00:52:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:14.491 00:52:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:14.748 00:52:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:14.748 00:52:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:15:14.748 00:52:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:14.748 00:52:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:14.748 00:52:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:15:17.275 00:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:17.275 00:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:17.275 00:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:17.275 00:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:17.275 00:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.275 00:52:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:15:17.275 00:52:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:17.275 [global] 00:15:17.275 thread=1 00:15:17.275 invalidate=1 00:15:17.275 rw=write 00:15:17.275 time_based=1 00:15:17.275 runtime=1 00:15:17.275 ioengine=libaio 00:15:17.275 direct=1 00:15:17.275 bs=4096 00:15:17.275 iodepth=1 00:15:17.275 norandommap=0 00:15:17.275 numjobs=1 00:15:17.275 00:15:17.275 verify_dump=1 00:15:17.275 verify_backlog=512 00:15:17.275 verify_state_save=0 00:15:17.275 do_verify=1 00:15:17.275 verify=crc32c-intel 00:15:17.275 [job0] 00:15:17.275 filename=/dev/nvme0n1 00:15:17.275 Could not set queue depth (nvme0n1) 00:15:17.275 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:15:17.275 fio-3.35 00:15:17.275 Starting 1 thread 00:15:18.213 00:15:18.213 job0: (groupid=0, jobs=1): err= 0: pid=4010815: Wed May 15 00:52:05 2024 00:15:18.213 read: IOPS=1393, BW=5574KiB/s (5708kB/s)(5580KiB/1001msec) 00:15:18.213 slat (nsec): min=6175, max=54090, avg=12996.42, stdev=4403.39 00:15:18.213 clat (usec): min=311, max=522, avg=365.90, stdev=35.34 00:15:18.213 lat (usec): min=318, max=538, avg=378.89, stdev=37.08 00:15:18.213 clat percentiles (usec): 00:15:18.213 | 1.00th=[ 318], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343], 00:15:18.213 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 363], 60.00th=[ 367], 00:15:18.213 | 70.00th=[ 371], 80.00th=[ 379], 90.00th=[ 388], 95.00th=[ 461], 00:15:18.213 | 99.00th=[ 502], 99.50th=[ 506], 99.90th=[ 519], 99.95th=[ 523], 00:15:18.213 | 99.99th=[ 523] 00:15:18.213 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:18.213 slat (nsec): min=8309, max=62632, avg=19314.35, stdev=7100.31 00:15:18.213 clat (usec): min=204, max=459, avg=279.20, stdev=39.67 00:15:18.213 lat (usec): min=215, max=502, avg=298.52, stdev=43.91 00:15:18.213 clat percentiles (usec): 00:15:18.213 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 241], 00:15:18.213 | 30.00th=[ 253], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 293], 00:15:18.213 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 326], 95.00th=[ 338], 00:15:18.213 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 424], 99.95th=[ 461], 00:15:18.213 | 99.99th=[ 461] 00:15:18.213 bw ( KiB/s): min= 7576, max= 7576, per=100.00%, avg=7576.00, stdev= 0.00, samples=1 00:15:18.213 iops : min= 1894, max= 1894, avg=1894.00, stdev= 0.00, samples=1 00:15:18.213 lat (usec) : 250=14.33%, 500=84.99%, 750=0.68% 00:15:18.213 cpu : usr=4.00%, sys=6.30%, ctx=2933, majf=0, minf=1 00:15:18.213 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:18.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.213 issued rwts: total=1395,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.213 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:18.213 00:15:18.213 Run status group 0 (all jobs): 00:15:18.213 READ: bw=5574KiB/s (5708kB/s), 5574KiB/s-5574KiB/s (5708kB/s-5708kB/s), io=5580KiB (5714kB), run=1001-1001msec 00:15:18.213 WRITE: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:15:18.213 00:15:18.213 Disk stats (read/write): 00:15:18.213 nvme0n1: ios=1152/1536, merge=0/0, ticks=1392/424, in_queue=1816, util=98.80% 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:18.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 
-- # return 0 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:18.213 rmmod nvme_tcp 00:15:18.213 rmmod nvme_fabrics 00:15:18.213 rmmod nvme_keyring 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 4010413 ']' 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 4010413 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 4010413 ']' 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 4010413 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4010413 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4010413' 00:15:18.213 killing process with pid 4010413 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 4010413 00:15:18.213 [2024-05-15 00:52:05.264670] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:18.213 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 4010413 00:15:18.472 00:52:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:18.472 00:52:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:18.472 00:52:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:18.472 00:52:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:18.472 00:52:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:18.472 00:52:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.472 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.472 00:52:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.008 00:52:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:21.008 00:15:21.008 real 0m8.984s 00:15:21.008 user 0m19.696s 00:15:21.008 sys 0m2.140s 00:15:21.008 00:52:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:21.008 00:52:07 
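Test case2 above, stripped of the harness wrappers, is just two controllers to one subsystem over two listeners; a sketch, with the host NQN/ID being the values generated for this run:

    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc \
        --hostid=a27f578f-8275-e111-bd1d-001e673e77fc \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc \
        --hostid=a27f578f-8275-e111-bd1d-001e673e77fc \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
    # One namespace, two paths; the disconnect above reports "disconnected 2 controller(s)".
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1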
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:21.008 ************************************ 00:15:21.008 END TEST nvmf_nmic 00:15:21.008 ************************************ 00:15:21.008 00:52:07 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:21.008 00:52:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:21.008 00:52:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:21.008 00:52:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:21.008 ************************************ 00:15:21.008 START TEST nvmf_fio_target 00:15:21.008 ************************************ 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:21.008 * Looking for test storage... 00:15:21.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.008 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:21.009 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:21.009 00:52:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:21.009 00:52:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.386 00:52:09 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:15:22.386 Found 0000:08:00.0 (0x8086 - 0x159b) 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.386 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:15:22.387 Found 0000:08:00.1 (0x8086 - 0x159b) 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.387 00:52:09 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:15:22.387 Found net devices under 0000:08:00.0: cvl_0_0 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:15:22.387 Found net devices under 0000:08:00.1: cvl_0_1 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:22.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:22.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:15:22.387 00:15:22.387 --- 10.0.0.2 ping statistics --- 00:15:22.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.387 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:22.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:15:22.387 00:15:22.387 --- 10.0.0.1 ping statistics --- 00:15:22.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.387 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=4012416 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 4012416 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 4012416 ']' 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
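For anyone reconstructing this topology by hand: the nvmf_tcp_init sequence traced above isolates the target from the initiator on a single host by moving one port of the NIC pair (cvl_0_0) into a private network namespace while its peer (cvl_0_1) stays in the root namespace. A condensed, annotated sketch of the exact commands from this log follows; the interface names and the 10.0.0.0/24 addressing are specific to this run:

  ip netns add cvl_0_0_ns_spdk                                        # namespace that will host the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP (port 4420) traffic in on cvl_0_1
  ping -c 1 10.0.0.2                                                  # root namespace reaches the target (verified above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace reaches the initiator

Every nvmf_tgt invocation below is then wrapped in "ip netns exec cvl_0_0_ns_spdk" (NVMF_TARGET_NS_CMD), which is why the target listens on 10.0.0.2 while fio connects to it from the root namespace.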
00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:22.387 00:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.646 [2024-05-15 00:52:09.488526] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:15:22.646 [2024-05-15 00:52:09.488614] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.646 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.646 [2024-05-15 00:52:09.553020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.646 [2024-05-15 00:52:09.670071] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.646 [2024-05-15 00:52:09.670131] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.646 [2024-05-15 00:52:09.670147] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.646 [2024-05-15 00:52:09.670161] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.646 [2024-05-15 00:52:09.670173] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:22.646 [2024-05-15 00:52:09.670262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.646 [2024-05-15 00:52:09.670318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.646 [2024-05-15 00:52:09.670370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.646 [2024-05-15 00:52:09.670373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.904 00:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:22.904 00:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:15:22.904 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:22.904 00:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:22.904 00:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.904 00:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.904 00:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:23.162 [2024-05-15 00:52:10.087532] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.162 00:52:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:23.420 00:52:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:23.420 00:52:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:23.678 00:52:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:23.678 00:52:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:24.242 00:52:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:24.242 00:52:11 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:24.242 00:52:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:24.242 00:52:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:24.500 00:52:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:24.758 00:52:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:24.758 00:52:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:25.016 00:52:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:25.016 00:52:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:25.274 00:52:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:25.274 00:52:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:25.531 00:52:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:25.789 00:52:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:25.789 00:52:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:26.046 00:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:26.046 00:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:26.304 00:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:26.560 [2024-05-15 00:52:13.463918] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:26.560 [2024-05-15 00:52:13.464242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:26.560 00:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:26.817 00:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:27.075 00:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:27.641 00:52:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:15:27.641 00:52:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:15:27.641 00:52:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:27.641 00:52:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:15:27.641 00:52:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:15:27.641 00:52:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:15:29.537 00:52:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:29.537 00:52:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:29.537 00:52:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:29.537 00:52:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:15:29.537 00:52:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:29.537 00:52:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:15:29.537 00:52:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:29.537 [global] 00:15:29.537 thread=1 00:15:29.537 invalidate=1 00:15:29.537 rw=write 00:15:29.537 time_based=1 00:15:29.537 runtime=1 00:15:29.537 ioengine=libaio 00:15:29.537 direct=1 00:15:29.537 bs=4096 00:15:29.537 iodepth=1 00:15:29.537 norandommap=0 00:15:29.537 numjobs=1 00:15:29.537 00:15:29.537 verify_dump=1 00:15:29.537 verify_backlog=512 00:15:29.537 verify_state_save=0 00:15:29.537 do_verify=1 00:15:29.537 verify=crc32c-intel 00:15:29.537 [job0] 00:15:29.537 filename=/dev/nvme0n1 00:15:29.537 [job1] 00:15:29.537 filename=/dev/nvme0n2 00:15:29.537 [job2] 00:15:29.537 filename=/dev/nvme0n3 00:15:29.537 [job3] 00:15:29.537 filename=/dev/nvme0n4 00:15:29.537 Could not set queue depth (nvme0n1) 00:15:29.537 Could not set queue depth (nvme0n2) 00:15:29.537 Could not set queue depth (nvme0n3) 00:15:29.537 Could not set queue depth (nvme0n4) 00:15:29.795 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:29.795 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:29.795 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:29.795 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:29.795 fio-3.35 00:15:29.795 Starting 4 threads 00:15:31.168 00:15:31.168 job0: (groupid=0, jobs=1): err= 0: pid=4013155: Wed May 15 00:52:17 2024 00:15:31.168 read: IOPS=412, BW=1650KiB/s (1690kB/s)(1652KiB/1001msec) 00:15:31.168 slat (nsec): min=10386, max=59285, avg=16537.10, stdev=5404.49 00:15:31.168 clat (usec): min=362, max=42369, avg=2026.35, stdev=7893.30 00:15:31.168 lat (usec): min=373, max=42381, avg=2042.88, stdev=7894.36 00:15:31.168 clat percentiles (usec): 00:15:31.168 | 1.00th=[ 371], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[ 392], 00:15:31.168 | 30.00th=[ 400], 40.00th=[ 404], 50.00th=[ 408], 60.00th=[ 412], 00:15:31.168 | 70.00th=[ 416], 80.00th=[ 424], 90.00th=[ 437], 95.00th=[ 474], 00:15:31.168 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:15:31.168 | 
99.99th=[42206] 00:15:31.168 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:15:31.168 slat (nsec): min=6876, max=43332, avg=18147.97, stdev=7260.84 00:15:31.168 clat (usec): min=205, max=1086, avg=276.68, stdev=63.53 00:15:31.168 lat (usec): min=219, max=1099, avg=294.83, stdev=65.41 00:15:31.168 clat percentiles (usec): 00:15:31.168 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:15:31.168 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 260], 60.00th=[ 273], 00:15:31.168 | 70.00th=[ 293], 80.00th=[ 322], 90.00th=[ 347], 95.00th=[ 363], 00:15:31.168 | 99.00th=[ 465], 99.50th=[ 537], 99.90th=[ 1090], 99.95th=[ 1090], 00:15:31.168 | 99.99th=[ 1090] 00:15:31.168 bw ( KiB/s): min= 4096, max= 4096, per=24.08%, avg=4096.00, stdev= 0.00, samples=1 00:15:31.168 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:31.168 lat (usec) : 250=23.14%, 500=74.38%, 750=0.54% 00:15:31.168 lat (msec) : 2=0.11%, 20=0.11%, 50=1.73% 00:15:31.168 cpu : usr=1.00%, sys=1.60%, ctx=926, majf=0, minf=1 00:15:31.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:31.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.168 issued rwts: total=413,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:31.168 job1: (groupid=0, jobs=1): err= 0: pid=4013156: Wed May 15 00:52:17 2024 00:15:31.168 read: IOPS=1017, BW=4071KiB/s (4169kB/s)(4100KiB/1007msec) 00:15:31.168 slat (nsec): min=5134, max=29341, avg=9325.08, stdev=2920.12 00:15:31.168 clat (usec): min=285, max=42100, avg=436.82, stdev=1829.58 00:15:31.168 lat (usec): min=293, max=42107, avg=446.14, stdev=1829.75 00:15:31.168 clat percentiles (usec): 00:15:31.168 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 322], 00:15:31.168 | 30.00th=[ 326], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 334], 00:15:31.168 | 70.00th=[ 343], 80.00th=[ 363], 90.00th=[ 453], 95.00th=[ 486], 00:15:31.168 | 99.00th=[ 603], 99.50th=[ 816], 99.90th=[41157], 99.95th=[42206], 00:15:31.168 | 99.99th=[42206] 00:15:31.168 write: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec); 0 zone resets 00:15:31.168 slat (nsec): min=6704, max=45730, avg=16860.70, stdev=7930.34 00:15:31.168 clat (usec): min=213, max=746, avg=334.72, stdev=91.55 00:15:31.168 lat (usec): min=222, max=770, avg=351.58, stdev=95.84 00:15:31.168 clat percentiles (usec): 00:15:31.168 | 1.00th=[ 251], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 265], 00:15:31.168 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 314], 00:15:31.168 | 70.00th=[ 343], 80.00th=[ 404], 90.00th=[ 498], 95.00th=[ 537], 00:15:31.168 | 99.00th=[ 586], 99.50th=[ 611], 99.90th=[ 709], 99.95th=[ 750], 00:15:31.168 | 99.99th=[ 750] 00:15:31.168 bw ( KiB/s): min= 4952, max= 7336, per=36.11%, avg=6144.00, stdev=1685.74, samples=2 00:15:31.168 iops : min= 1238, max= 1834, avg=1536.00, stdev=421.44, samples=2 00:15:31.168 lat (usec) : 250=0.62%, 500=91.84%, 750=7.30%, 1000=0.04% 00:15:31.168 lat (msec) : 2=0.12%, 50=0.08% 00:15:31.168 cpu : usr=1.79%, sys=3.98%, ctx=2562, majf=0, minf=1 00:15:31.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:31.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.168 issued rwts: 
total=1025,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:31.168 job2: (groupid=0, jobs=1): err= 0: pid=4013157: Wed May 15 00:52:17 2024 00:15:31.168 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:15:31.168 slat (nsec): min=9476, max=38116, avg=17126.31, stdev=3054.08 00:15:31.168 clat (usec): min=332, max=41058, avg=1363.59, stdev=6200.29 00:15:31.168 lat (usec): min=348, max=41074, avg=1380.72, stdev=6201.00 00:15:31.168 clat percentiles (usec): 00:15:31.168 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 359], 00:15:31.168 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 375], 00:15:31.168 | 70.00th=[ 379], 80.00th=[ 388], 90.00th=[ 412], 95.00th=[ 437], 00:15:31.168 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:31.168 | 99.99th=[41157] 00:15:31.168 write: IOPS=698, BW=2793KiB/s (2860kB/s)(2796KiB/1001msec); 0 zone resets 00:15:31.168 slat (nsec): min=8903, max=66923, avg=22254.79, stdev=7261.71 00:15:31.168 clat (usec): min=217, max=1068, avg=386.04, stdev=135.74 00:15:31.168 lat (usec): min=229, max=1089, avg=408.30, stdev=136.68 00:15:31.168 clat percentiles (usec): 00:15:31.168 | 1.00th=[ 223], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 258], 00:15:31.168 | 30.00th=[ 269], 40.00th=[ 285], 50.00th=[ 343], 60.00th=[ 453], 00:15:31.168 | 70.00th=[ 482], 80.00th=[ 510], 90.00th=[ 545], 95.00th=[ 578], 00:15:31.168 | 99.00th=[ 807], 99.50th=[ 938], 99.90th=[ 1074], 99.95th=[ 1074], 00:15:31.168 | 99.99th=[ 1074] 00:15:31.168 bw ( KiB/s): min= 4096, max= 4096, per=24.08%, avg=4096.00, stdev= 0.00, samples=1 00:15:31.168 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:31.168 lat (usec) : 250=6.94%, 500=78.36%, 750=12.88%, 1000=0.66% 00:15:31.168 lat (msec) : 2=0.08%, 50=1.07% 00:15:31.168 cpu : usr=2.50%, sys=2.50%, ctx=1212, majf=0, minf=1 00:15:31.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:31.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.168 issued rwts: total=512,699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:31.168 job3: (groupid=0, jobs=1): err= 0: pid=4013158: Wed May 15 00:52:17 2024 00:15:31.168 read: IOPS=1032, BW=4132KiB/s (4231kB/s)(4136KiB/1001msec) 00:15:31.168 slat (nsec): min=6730, max=41282, avg=12701.36, stdev=5247.82 00:15:31.168 clat (usec): min=412, max=9258, avg=472.00, stdev=275.84 00:15:31.168 lat (usec): min=420, max=9290, avg=484.70, stdev=276.74 00:15:31.168 clat percentiles (usec): 00:15:31.168 | 1.00th=[ 420], 5.00th=[ 424], 10.00th=[ 429], 20.00th=[ 437], 00:15:31.168 | 30.00th=[ 445], 40.00th=[ 457], 50.00th=[ 465], 60.00th=[ 474], 00:15:31.169 | 70.00th=[ 478], 80.00th=[ 482], 90.00th=[ 490], 95.00th=[ 502], 00:15:31.169 | 99.00th=[ 529], 99.50th=[ 594], 99.90th=[ 988], 99.95th=[ 9241], 00:15:31.169 | 99.99th=[ 9241] 00:15:31.169 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:31.169 slat (usec): min=8, max=1497, avg=19.17, stdev=38.57 00:15:31.169 clat (usec): min=225, max=1031, avg=297.74, stdev=49.83 00:15:31.169 lat (usec): min=237, max=1805, avg=316.91, stdev=63.81 00:15:31.169 clat percentiles (usec): 00:15:31.169 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 260], 00:15:31.169 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 
293], 60.00th=[ 302], 00:15:31.169 | 70.00th=[ 310], 80.00th=[ 326], 90.00th=[ 347], 95.00th=[ 359], 00:15:31.169 | 99.00th=[ 433], 99.50th=[ 490], 99.90th=[ 979], 99.95th=[ 1029], 00:15:31.169 | 99.99th=[ 1029] 00:15:31.169 bw ( KiB/s): min= 6176, max= 6176, per=36.30%, avg=6176.00, stdev= 0.00, samples=1 00:15:31.169 iops : min= 1544, max= 1544, avg=1544.00, stdev= 0.00, samples=1 00:15:31.169 lat (usec) : 250=5.41%, 500=92.30%, 750=1.98%, 1000=0.23% 00:15:31.169 lat (msec) : 2=0.04%, 10=0.04% 00:15:31.169 cpu : usr=3.00%, sys=5.50%, ctx=2574, majf=0, minf=1 00:15:31.169 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:31.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.169 issued rwts: total=1034,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.169 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:31.169 00:15:31.169 Run status group 0 (all jobs): 00:15:31.169 READ: bw=11.6MiB/s (12.1MB/s), 1650KiB/s-4132KiB/s (1690kB/s-4231kB/s), io=11.7MiB (12.2MB), run=1001-1007msec 00:15:31.169 WRITE: bw=16.6MiB/s (17.4MB/s), 2046KiB/s-6138KiB/s (2095kB/s-6285kB/s), io=16.7MiB (17.5MB), run=1001-1007msec 00:15:31.169 00:15:31.169 Disk stats (read/write): 00:15:31.169 nvme0n1: ios=85/512, merge=0/0, ticks=1554/142, in_queue=1696, util=85.57% 00:15:31.169 nvme0n2: ios=1073/1186, merge=0/0, ticks=556/409, in_queue=965, util=89.63% 00:15:31.169 nvme0n3: ios=328/512, merge=0/0, ticks=725/206, in_queue=931, util=95.51% 00:15:31.169 nvme0n4: ios=1084/1097, merge=0/0, ticks=856/325, in_queue=1181, util=96.21% 00:15:31.169 00:52:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:31.169 [global] 00:15:31.169 thread=1 00:15:31.169 invalidate=1 00:15:31.169 rw=randwrite 00:15:31.169 time_based=1 00:15:31.169 runtime=1 00:15:31.169 ioengine=libaio 00:15:31.169 direct=1 00:15:31.169 bs=4096 00:15:31.169 iodepth=1 00:15:31.169 norandommap=0 00:15:31.169 numjobs=1 00:15:31.169 00:15:31.169 verify_dump=1 00:15:31.169 verify_backlog=512 00:15:31.169 verify_state_save=0 00:15:31.169 do_verify=1 00:15:31.169 verify=crc32c-intel 00:15:31.169 [job0] 00:15:31.169 filename=/dev/nvme0n1 00:15:31.169 [job1] 00:15:31.169 filename=/dev/nvme0n2 00:15:31.169 [job2] 00:15:31.169 filename=/dev/nvme0n3 00:15:31.169 [job3] 00:15:31.169 filename=/dev/nvme0n4 00:15:31.169 Could not set queue depth (nvme0n1) 00:15:31.169 Could not set queue depth (nvme0n2) 00:15:31.169 Could not set queue depth (nvme0n3) 00:15:31.169 Could not set queue depth (nvme0n4) 00:15:31.169 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:31.169 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:31.169 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:31.169 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:31.169 fio-3.35 00:15:31.169 Starting 4 threads 00:15:32.542 00:15:32.542 job0: (groupid=0, jobs=1): err= 0: pid=4013428: Wed May 15 00:52:19 2024 00:15:32.542 read: IOPS=73, BW=295KiB/s (303kB/s)(304KiB/1029msec) 00:15:32.542 slat (nsec): min=7304, max=35530, avg=12591.59, stdev=7656.58 00:15:32.542 clat (usec): min=306, 
max=41461, avg=11631.87, stdev=18286.49 00:15:32.543 lat (usec): min=313, max=41491, avg=11644.46, stdev=18291.11 00:15:32.543 clat percentiles (usec): 00:15:32.543 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 330], 00:15:32.543 | 30.00th=[ 343], 40.00th=[ 433], 50.00th=[ 449], 60.00th=[ 461], 00:15:32.543 | 70.00th=[ 594], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:32.543 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:32.543 | 99.99th=[41681] 00:15:32.543 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:15:32.543 slat (nsec): min=7503, max=41815, avg=14331.67, stdev=7369.46 00:15:32.543 clat (usec): min=205, max=446, avg=261.52, stdev=37.28 00:15:32.543 lat (usec): min=214, max=465, avg=275.85, stdev=41.81 00:15:32.543 clat percentiles (usec): 00:15:32.543 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 231], 00:15:32.543 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 260], 00:15:32.543 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 330], 00:15:32.543 | 99.00th=[ 375], 99.50th=[ 424], 99.90th=[ 445], 99.95th=[ 445], 00:15:32.543 | 99.99th=[ 445] 00:15:32.543 bw ( KiB/s): min= 4096, max= 4096, per=34.43%, avg=4096.00, stdev= 0.00, samples=1 00:15:32.543 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:32.543 lat (usec) : 250=42.52%, 500=53.57%, 750=0.17% 00:15:32.543 lat (msec) : 2=0.17%, 50=3.57% 00:15:32.543 cpu : usr=0.19%, sys=1.36%, ctx=588, majf=0, minf=1 00:15:32.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:32.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.543 issued rwts: total=76,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:32.543 job1: (groupid=0, jobs=1): err= 0: pid=4013429: Wed May 15 00:52:19 2024 00:15:32.543 read: IOPS=995, BW=3981KiB/s (4076kB/s)(4112KiB/1033msec) 00:15:32.543 slat (nsec): min=5694, max=60378, avg=14624.90, stdev=8998.94 00:15:32.543 clat (usec): min=316, max=42473, avg=584.39, stdev=2893.63 00:15:32.543 lat (usec): min=321, max=42491, avg=599.02, stdev=2894.43 00:15:32.543 clat percentiles (usec): 00:15:32.543 | 1.00th=[ 326], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 351], 00:15:32.543 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 375], 00:15:32.543 | 70.00th=[ 388], 80.00th=[ 416], 90.00th=[ 465], 95.00th=[ 494], 00:15:32.543 | 99.00th=[ 537], 99.50th=[ 586], 99.90th=[42206], 99.95th=[42730], 00:15:32.543 | 99.99th=[42730] 00:15:32.543 write: IOPS=1486, BW=5948KiB/s (6090kB/s)(6144KiB/1033msec); 0 zone resets 00:15:32.543 slat (nsec): min=7280, max=85343, avg=13751.51, stdev=6961.24 00:15:32.543 clat (usec): min=189, max=1783, avg=251.16, stdev=62.94 00:15:32.543 lat (usec): min=196, max=1800, avg=264.91, stdev=65.87 00:15:32.543 clat percentiles (usec): 00:15:32.543 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 215], 00:15:32.543 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 251], 00:15:32.543 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 322], 00:15:32.543 | 99.00th=[ 429], 99.50th=[ 469], 99.90th=[ 1139], 99.95th=[ 1778], 00:15:32.543 | 99.99th=[ 1778] 00:15:32.543 bw ( KiB/s): min= 4264, max= 8024, per=51.65%, avg=6144.00, stdev=2658.72, samples=2 00:15:32.543 iops : min= 1066, max= 2006, avg=1536.00, stdev=664.68, samples=2 
00:15:32.543 lat (usec) : 250=35.41%, 500=62.83%, 750=1.48% 00:15:32.543 lat (msec) : 2=0.08%, 50=0.20% 00:15:32.543 cpu : usr=2.13%, sys=3.68%, ctx=2566, majf=0, minf=1 00:15:32.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:32.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.543 issued rwts: total=1028,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:32.543 job2: (groupid=0, jobs=1): err= 0: pid=4013430: Wed May 15 00:52:19 2024 00:15:32.543 read: IOPS=20, BW=81.3KiB/s (83.3kB/s)(84.0KiB/1033msec) 00:15:32.543 slat (nsec): min=7623, max=32687, avg=19038.05, stdev=6928.31 00:15:32.543 clat (usec): min=40944, max=41264, avg=40989.78, stdev=65.29 00:15:32.543 lat (usec): min=40971, max=41272, avg=41008.82, stdev=62.04 00:15:32.543 clat percentiles (usec): 00:15:32.543 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:32.543 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:32.543 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:32.543 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:32.543 | 99.99th=[41157] 00:15:32.543 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:15:32.543 slat (nsec): min=7298, max=52545, avg=14614.00, stdev=8554.72 00:15:32.543 clat (usec): min=227, max=783, avg=316.36, stdev=73.27 00:15:32.543 lat (usec): min=234, max=798, avg=330.98, stdev=75.36 00:15:32.543 clat percentiles (usec): 00:15:32.543 | 1.00th=[ 237], 5.00th=[ 243], 10.00th=[ 245], 20.00th=[ 253], 00:15:32.543 | 30.00th=[ 262], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 314], 00:15:32.543 | 70.00th=[ 347], 80.00th=[ 375], 90.00th=[ 408], 95.00th=[ 453], 00:15:32.543 | 99.00th=[ 545], 99.50th=[ 611], 99.90th=[ 783], 99.95th=[ 783], 00:15:32.543 | 99.99th=[ 783] 00:15:32.543 bw ( KiB/s): min= 4096, max= 4096, per=34.43%, avg=4096.00, stdev= 0.00, samples=1 00:15:32.543 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:32.543 lat (usec) : 250=14.63%, 500=78.99%, 750=2.25%, 1000=0.19% 00:15:32.543 lat (msec) : 50=3.94% 00:15:32.543 cpu : usr=0.68%, sys=0.39%, ctx=534, majf=0, minf=1 00:15:32.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:32.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.543 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:32.543 job3: (groupid=0, jobs=1): err= 0: pid=4013431: Wed May 15 00:52:19 2024 00:15:32.543 read: IOPS=20, BW=82.5KiB/s (84.5kB/s)(84.0KiB/1018msec) 00:15:32.543 slat (nsec): min=8309, max=38197, avg=23200.10, stdev=11380.95 00:15:32.543 clat (usec): min=480, max=41481, avg=39088.86, stdev=8847.64 00:15:32.543 lat (usec): min=493, max=41493, avg=39112.06, stdev=8849.75 00:15:32.543 clat percentiles (usec): 00:15:32.543 | 1.00th=[ 482], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:15:32.543 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:32.543 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:32.543 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:32.543 | 
99.99th=[41681] 00:15:32.543 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:15:32.543 slat (nsec): min=8328, max=47702, avg=14955.99, stdev=7065.09 00:15:32.543 clat (usec): min=198, max=2572, avg=364.09, stdev=144.51 00:15:32.543 lat (usec): min=207, max=2585, avg=379.05, stdev=146.28 00:15:32.543 clat percentiles (usec): 00:15:32.543 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 265], 00:15:32.543 | 30.00th=[ 347], 40.00th=[ 363], 50.00th=[ 375], 60.00th=[ 383], 00:15:32.543 | 70.00th=[ 396], 80.00th=[ 408], 90.00th=[ 441], 95.00th=[ 486], 00:15:32.543 | 99.00th=[ 537], 99.50th=[ 594], 99.90th=[ 2573], 99.95th=[ 2573], 00:15:32.543 | 99.99th=[ 2573] 00:15:32.543 bw ( KiB/s): min= 4096, max= 4096, per=34.43%, avg=4096.00, stdev= 0.00, samples=1 00:15:32.543 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:32.543 lat (usec) : 250=18.20%, 500=74.48%, 750=3.19% 00:15:32.543 lat (msec) : 2=0.19%, 4=0.19%, 50=3.75% 00:15:32.543 cpu : usr=0.59%, sys=0.88%, ctx=535, majf=0, minf=1 00:15:32.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:32.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.543 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:32.543 00:15:32.543 Run status group 0 (all jobs): 00:15:32.543 READ: bw=4438KiB/s (4544kB/s), 81.3KiB/s-3981KiB/s (83.3kB/s-4076kB/s), io=4584KiB (4694kB), run=1018-1033msec 00:15:32.543 WRITE: bw=11.6MiB/s (12.2MB/s), 1983KiB/s-5948KiB/s (2030kB/s-6090kB/s), io=12.0MiB (12.6MB), run=1018-1033msec 00:15:32.543 00:15:32.543 Disk stats (read/write): 00:15:32.543 nvme0n1: ios=119/512, merge=0/0, ticks=710/130, in_queue=840, util=87.27% 00:15:32.543 nvme0n2: ios=1068/1486, merge=0/0, ticks=619/366, in_queue=985, util=97.66% 00:15:32.543 nvme0n3: ios=75/512, merge=0/0, ticks=996/162, in_queue=1158, util=98.23% 00:15:32.543 nvme0n4: ios=67/512, merge=0/0, ticks=964/179, in_queue=1143, util=97.16% 00:15:32.543 00:52:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:32.543 [global] 00:15:32.543 thread=1 00:15:32.543 invalidate=1 00:15:32.543 rw=write 00:15:32.543 time_based=1 00:15:32.543 runtime=1 00:15:32.543 ioengine=libaio 00:15:32.543 direct=1 00:15:32.543 bs=4096 00:15:32.543 iodepth=128 00:15:32.543 norandommap=0 00:15:32.543 numjobs=1 00:15:32.543 00:15:32.543 verify_dump=1 00:15:32.543 verify_backlog=512 00:15:32.543 verify_state_save=0 00:15:32.543 do_verify=1 00:15:32.543 verify=crc32c-intel 00:15:32.543 [job0] 00:15:32.543 filename=/dev/nvme0n1 00:15:32.543 [job1] 00:15:32.543 filename=/dev/nvme0n2 00:15:32.543 [job2] 00:15:32.543 filename=/dev/nvme0n3 00:15:32.543 [job3] 00:15:32.543 filename=/dev/nvme0n4 00:15:32.543 Could not set queue depth (nvme0n1) 00:15:32.543 Could not set queue depth (nvme0n2) 00:15:32.543 Could not set queue depth (nvme0n3) 00:15:32.543 Could not set queue depth (nvme0n4) 00:15:32.811 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:32.811 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:32.811 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:15:32.811 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:32.811 fio-3.35 00:15:32.811 Starting 4 threads 00:15:34.196 00:15:34.196 job0: (groupid=0, jobs=1): err= 0: pid=4013611: Wed May 15 00:52:20 2024 00:15:34.196 read: IOPS=4606, BW=18.0MiB/s (18.9MB/s)(18.2MiB/1009msec) 00:15:34.196 slat (usec): min=3, max=7658, avg=95.26, stdev=457.85 00:15:34.196 clat (usec): min=7827, max=24976, avg=12471.54, stdev=1839.40 00:15:34.196 lat (usec): min=8305, max=24983, avg=12566.80, stdev=1831.21 00:15:34.196 clat percentiles (usec): 00:15:34.196 | 1.00th=[ 8717], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[11338], 00:15:34.196 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:15:34.196 | 70.00th=[12911], 80.00th=[13698], 90.00th=[15401], 95.00th=[16057], 00:15:34.196 | 99.00th=[17433], 99.50th=[18220], 99.90th=[25035], 99.95th=[25035], 00:15:34.196 | 99.99th=[25035] 00:15:34.196 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:15:34.196 slat (usec): min=5, max=20777, avg=98.33, stdev=549.04 00:15:34.196 clat (usec): min=7806, max=33046, avg=13087.76, stdev=3838.41 00:15:34.196 lat (usec): min=7826, max=33067, avg=13186.09, stdev=3858.23 00:15:34.196 clat percentiles (usec): 00:15:34.196 | 1.00th=[ 8586], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11076], 00:15:34.196 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11863], 60.00th=[12518], 00:15:34.196 | 70.00th=[13173], 80.00th=[13960], 90.00th=[15926], 95.00th=[21365], 00:15:34.196 | 99.00th=[29754], 99.50th=[31065], 99.90th=[32900], 99.95th=[32900], 00:15:34.196 | 99.99th=[33162] 00:15:34.196 bw ( KiB/s): min=19712, max=20552, per=37.69%, avg=20132.00, stdev=593.97, samples=2 00:15:34.196 iops : min= 4928, max= 5138, avg=5033.00, stdev=148.49, samples=2 00:15:34.196 lat (msec) : 10=6.82%, 20=89.88%, 50=3.31% 00:15:34.196 cpu : usr=6.85%, sys=11.11%, ctx=506, majf=0, minf=9 00:15:34.196 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:34.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:34.196 issued rwts: total=4648,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.196 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:34.196 job1: (groupid=0, jobs=1): err= 0: pid=4013612: Wed May 15 00:52:20 2024 00:15:34.196 read: IOPS=1785, BW=7143KiB/s (7315kB/s)(7172KiB/1004msec) 00:15:34.196 slat (usec): min=3, max=22411, avg=248.54, stdev=1410.18 00:15:34.196 clat (usec): min=704, max=82605, avg=28121.86, stdev=14631.44 00:15:34.196 lat (usec): min=7935, max=82624, avg=28370.40, stdev=14723.92 00:15:34.196 clat percentiles (usec): 00:15:34.196 | 1.00th=[ 8160], 5.00th=[10945], 10.00th=[11731], 20.00th=[14746], 00:15:34.196 | 30.00th=[16319], 40.00th=[21103], 50.00th=[26870], 60.00th=[32113], 00:15:34.196 | 70.00th=[33817], 80.00th=[38011], 90.00th=[46924], 95.00th=[55837], 00:15:34.196 | 99.00th=[72877], 99.50th=[82314], 99.90th=[82314], 99.95th=[82314], 00:15:34.196 | 99.99th=[82314] 00:15:34.196 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:15:34.196 slat (usec): min=4, max=18101, avg=263.01, stdev=1318.83 00:15:34.196 clat (usec): min=8281, max=91803, avg=36896.86, stdev=19725.61 00:15:34.196 lat (usec): min=8291, max=91825, avg=37159.88, stdev=19820.32 00:15:34.196 clat percentiles (usec): 00:15:34.196 | 1.00th=[13042], 
5.00th=[15270], 10.00th=[19268], 20.00th=[20055], 00:15:34.196 | 30.00th=[20841], 40.00th=[22152], 50.00th=[35390], 60.00th=[40633], 00:15:34.196 | 70.00th=[45351], 80.00th=[51119], 90.00th=[66323], 95.00th=[78119], 00:15:34.196 | 99.00th=[91751], 99.50th=[91751], 99.90th=[91751], 99.95th=[91751], 00:15:34.196 | 99.99th=[91751] 00:15:34.196 bw ( KiB/s): min= 5296, max=11088, per=15.34%, avg=8192.00, stdev=4095.56, samples=2 00:15:34.197 iops : min= 1324, max= 2772, avg=2048.00, stdev=1023.89, samples=2 00:15:34.197 lat (usec) : 750=0.03% 00:15:34.197 lat (msec) : 10=2.06%, 20=24.34%, 50=58.47%, 100=15.10% 00:15:34.197 cpu : usr=2.39%, sys=3.79%, ctx=208, majf=0, minf=15 00:15:34.197 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:15:34.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:34.197 issued rwts: total=1793,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.197 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:34.197 job2: (groupid=0, jobs=1): err= 0: pid=4013613: Wed May 15 00:52:20 2024 00:15:34.197 read: IOPS=4388, BW=17.1MiB/s (18.0MB/s)(17.2MiB/1002msec) 00:15:34.197 slat (usec): min=3, max=27048, avg=105.79, stdev=825.74 00:15:34.197 clat (usec): min=719, max=46118, avg=14300.78, stdev=4897.60 00:15:34.197 lat (usec): min=733, max=46153, avg=14406.57, stdev=4938.34 00:15:34.197 clat percentiles (usec): 00:15:34.197 | 1.00th=[ 2376], 5.00th=[ 7308], 10.00th=[ 8455], 20.00th=[11338], 00:15:34.197 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13435], 60.00th=[15008], 00:15:34.197 | 70.00th=[16450], 80.00th=[17695], 90.00th=[19006], 95.00th=[20055], 00:15:34.197 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:15:34.197 | 99.99th=[45876] 00:15:34.197 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:15:34.197 slat (usec): min=4, max=16696, avg=103.59, stdev=781.19 00:15:34.197 clat (usec): min=349, max=35532, avg=13677.41, stdev=4326.70 00:15:34.197 lat (usec): min=388, max=35554, avg=13781.00, stdev=4365.01 00:15:34.197 clat percentiles (usec): 00:15:34.197 | 1.00th=[ 914], 5.00th=[ 6128], 10.00th=[ 8291], 20.00th=[11600], 00:15:34.197 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13435], 60.00th=[14222], 00:15:34.197 | 70.00th=[14877], 80.00th=[16909], 90.00th=[19530], 95.00th=[21103], 00:15:34.197 | 99.00th=[25822], 99.50th=[25822], 99.90th=[29492], 99.95th=[30802], 00:15:34.197 | 99.99th=[35390] 00:15:34.197 bw ( KiB/s): min=16384, max=20480, per=34.51%, avg=18432.00, stdev=2896.31, samples=2 00:15:34.197 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:15:34.197 lat (usec) : 500=0.02%, 750=0.22%, 1000=0.48% 00:15:34.197 lat (msec) : 2=0.40%, 4=1.04%, 10=14.19%, 20=77.02%, 50=6.62% 00:15:34.197 cpu : usr=4.50%, sys=7.19%, ctx=281, majf=0, minf=13 00:15:34.197 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:34.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:34.197 issued rwts: total=4397,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.197 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:34.197 job3: (groupid=0, jobs=1): err= 0: pid=4013615: Wed May 15 00:52:20 2024 00:15:34.197 read: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec) 00:15:34.197 slat (usec): min=4, max=35549, 
avg=334.58, stdev=2033.34 00:15:34.197 clat (usec): min=20619, max=91897, avg=42379.45, stdev=19299.39 00:15:34.197 lat (usec): min=23874, max=91913, avg=42714.04, stdev=19363.15 00:15:34.197 clat percentiles (usec): 00:15:34.197 | 1.00th=[24773], 5.00th=[27919], 10.00th=[29492], 20.00th=[30540], 00:15:34.197 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32375], 60.00th=[33162], 00:15:34.197 | 70.00th=[35914], 80.00th=[60556], 90.00th=[79168], 95.00th=[86508], 00:15:34.197 | 99.00th=[91751], 99.50th=[91751], 99.90th=[91751], 99.95th=[91751], 00:15:34.197 | 99.99th=[91751] 00:15:34.197 write: IOPS=1691, BW=6768KiB/s (6930kB/s)(6788KiB/1003msec); 0 zone resets 00:15:34.197 slat (usec): min=5, max=17209, avg=276.73, stdev=1369.27 00:15:34.197 clat (usec): min=572, max=105245, avg=36059.49, stdev=15887.37 00:15:34.197 lat (msec): min=7, max=105, avg=36.34, stdev=15.93 00:15:34.197 clat percentiles (msec): 00:15:34.197 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 21], 20.00th=[ 28], 00:15:34.197 | 30.00th=[ 30], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 34], 00:15:34.197 | 70.00th=[ 39], 80.00th=[ 45], 90.00th=[ 54], 95.00th=[ 73], 00:15:34.197 | 99.00th=[ 106], 99.50th=[ 106], 99.90th=[ 106], 99.95th=[ 106], 00:15:34.197 | 99.99th=[ 106] 00:15:34.197 bw ( KiB/s): min= 4360, max= 8192, per=11.75%, avg=6276.00, stdev=2709.63, samples=2 00:15:34.197 iops : min= 1090, max= 2048, avg=1569.00, stdev=677.41, samples=2 00:15:34.197 lat (usec) : 750=0.03% 00:15:34.197 lat (msec) : 10=0.99%, 20=3.50%, 50=77.30%, 100=17.66%, 250=0.53% 00:15:34.197 cpu : usr=1.90%, sys=4.19%, ctx=158, majf=0, minf=13 00:15:34.197 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:15:34.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:34.197 issued rwts: total=1536,1697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.197 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:34.197 00:15:34.197 Run status group 0 (all jobs): 00:15:34.197 READ: bw=47.9MiB/s (50.2MB/s), 6126KiB/s-18.0MiB/s (6273kB/s-18.9MB/s), io=48.3MiB (50.7MB), run=1002-1009msec 00:15:34.197 WRITE: bw=52.2MiB/s (54.7MB/s), 6768KiB/s-19.8MiB/s (6930kB/s-20.8MB/s), io=52.6MiB (55.2MB), run=1002-1009msec 00:15:34.197 00:15:34.197 Disk stats (read/write): 00:15:34.197 nvme0n1: ios=4148/4361, merge=0/0, ticks=14944/15370, in_queue=30314, util=93.79% 00:15:34.197 nvme0n2: ios=1580/1807, merge=0/0, ticks=13401/15547, in_queue=28948, util=97.97% 00:15:34.197 nvme0n3: ios=3644/3898, merge=0/0, ticks=35681/33168, in_queue=68849, util=98.23% 00:15:34.197 nvme0n4: ios=1104/1536, merge=0/0, ticks=13000/14654, in_queue=27654, util=98.11% 00:15:34.197 00:52:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:34.197 [global] 00:15:34.197 thread=1 00:15:34.197 invalidate=1 00:15:34.197 rw=randwrite 00:15:34.197 time_based=1 00:15:34.197 runtime=1 00:15:34.197 ioengine=libaio 00:15:34.197 direct=1 00:15:34.197 bs=4096 00:15:34.197 iodepth=128 00:15:34.197 norandommap=0 00:15:34.197 numjobs=1 00:15:34.197 00:15:34.197 verify_dump=1 00:15:34.197 verify_backlog=512 00:15:34.197 verify_state_save=0 00:15:34.197 do_verify=1 00:15:34.197 verify=crc32c-intel 00:15:34.197 [job0] 00:15:34.197 filename=/dev/nvme0n1 00:15:34.197 [job1] 00:15:34.197 filename=/dev/nvme0n2 00:15:34.197 [job2] 00:15:34.197 filename=/dev/nvme0n3 
00:15:34.197 [job3] 00:15:34.197 filename=/dev/nvme0n4 00:15:34.197 Could not set queue depth (nvme0n1) 00:15:34.197 Could not set queue depth (nvme0n2) 00:15:34.197 Could not set queue depth (nvme0n3) 00:15:34.197 Could not set queue depth (nvme0n4) 00:15:34.197 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:34.197 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:34.197 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:34.197 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:34.197 fio-3.35 00:15:34.197 Starting 4 threads 00:15:35.576 00:15:35.576 job0: (groupid=0, jobs=1): err= 0: pid=4013793: Wed May 15 00:52:22 2024 00:15:35.576 read: IOPS=4206, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1004msec) 00:15:35.576 slat (usec): min=3, max=13242, avg=100.00, stdev=631.05 00:15:35.576 clat (usec): min=881, max=40243, avg=12986.23, stdev=3738.40 00:15:35.576 lat (usec): min=5319, max=40889, avg=13086.23, stdev=3777.81 00:15:35.576 clat percentiles (usec): 00:15:35.576 | 1.00th=[ 6063], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10814], 00:15:35.576 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12387], 60.00th=[12911], 00:15:35.576 | 70.00th=[13435], 80.00th=[14222], 90.00th=[16581], 95.00th=[20841], 00:15:35.576 | 99.00th=[24249], 99.50th=[29230], 99.90th=[40109], 99.95th=[40109], 00:15:35.576 | 99.99th=[40109] 00:15:35.576 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:15:35.576 slat (usec): min=5, max=15495, avg=112.74, stdev=659.04 00:15:35.576 clat (usec): min=692, max=84571, avg=15660.97, stdev=11693.49 00:15:35.576 lat (usec): min=1216, max=84597, avg=15773.71, stdev=11754.93 00:15:35.576 clat percentiles (usec): 00:15:35.576 | 1.00th=[ 5932], 5.00th=[ 7177], 10.00th=[ 9241], 20.00th=[10683], 00:15:35.576 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11994], 60.00th=[12911], 00:15:35.576 | 70.00th=[13566], 80.00th=[16581], 90.00th=[29230], 95.00th=[34866], 00:15:35.576 | 99.00th=[73925], 99.50th=[82314], 99.90th=[84411], 99.95th=[84411], 00:15:35.576 | 99.99th=[84411] 00:15:35.576 bw ( KiB/s): min=17400, max=19456, per=28.57%, avg=18428.00, stdev=1453.81, samples=2 00:15:35.576 iops : min= 4350, max= 4864, avg=4607.00, stdev=363.45, samples=2 00:15:35.576 lat (usec) : 750=0.01%, 1000=0.01% 00:15:35.576 lat (msec) : 4=0.01%, 10=13.25%, 20=76.45%, 50=8.40%, 100=1.87% 00:15:35.576 cpu : usr=6.88%, sys=10.57%, ctx=453, majf=0, minf=11 00:15:35.576 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:35.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.576 issued rwts: total=4223,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.576 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.576 job1: (groupid=0, jobs=1): err= 0: pid=4013800: Wed May 15 00:52:22 2024 00:15:35.576 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:15:35.576 slat (usec): min=2, max=18765, avg=89.47, stdev=744.06 00:15:35.576 clat (usec): min=4830, max=54219, avg=15035.62, stdev=5044.94 00:15:35.576 lat (usec): min=4835, max=54226, avg=15125.09, stdev=5064.20 00:15:35.576 clat percentiles (usec): 00:15:35.576 | 1.00th=[ 8455], 5.00th=[10552], 10.00th=[11600], 
20.00th=[12125], 00:15:35.576 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13042], 60.00th=[14353], 00:15:35.576 | 70.00th=[15139], 80.00th=[16909], 90.00th=[21890], 95.00th=[28705], 00:15:35.576 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:15:35.576 | 99.99th=[54264] 00:15:35.576 write: IOPS=4638, BW=18.1MiB/s (19.0MB/s)(18.2MiB/1007msec); 0 zone resets 00:15:35.576 slat (usec): min=3, max=16744, avg=82.86, stdev=694.69 00:15:35.576 clat (usec): min=1809, max=27539, avg=12540.26, stdev=3615.78 00:15:35.576 lat (usec): min=1819, max=29220, avg=12623.12, stdev=3664.22 00:15:35.576 clat percentiles (usec): 00:15:35.576 | 1.00th=[ 4752], 5.00th=[ 7570], 10.00th=[ 8094], 20.00th=[ 9372], 00:15:35.576 | 30.00th=[10814], 40.00th=[11994], 50.00th=[12387], 60.00th=[12911], 00:15:35.576 | 70.00th=[13173], 80.00th=[15533], 90.00th=[17433], 95.00th=[19006], 00:15:35.576 | 99.00th=[21890], 99.50th=[25560], 99.90th=[27132], 99.95th=[27132], 00:15:35.576 | 99.99th=[27657] 00:15:35.576 bw ( KiB/s): min=17282, max=19616, per=28.61%, avg=18449.00, stdev=1650.39, samples=2 00:15:35.576 iops : min= 4320, max= 4904, avg=4612.00, stdev=412.95, samples=2 00:15:35.576 lat (msec) : 2=0.05%, 4=0.24%, 10=13.14%, 20=78.73%, 50=7.83% 00:15:35.576 lat (msec) : 100=0.01% 00:15:35.576 cpu : usr=4.57%, sys=8.65%, ctx=341, majf=0, minf=13 00:15:35.576 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:35.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.576 issued rwts: total=4608,4671,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.576 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.576 job2: (groupid=0, jobs=1): err= 0: pid=4013801: Wed May 15 00:52:22 2024 00:15:35.576 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:15:35.576 slat (usec): min=3, max=15674, avg=122.27, stdev=775.03 00:15:35.576 clat (usec): min=7675, max=31879, avg=16555.67, stdev=3055.91 00:15:35.576 lat (usec): min=7682, max=31917, avg=16677.94, stdev=3118.50 00:15:35.576 clat percentiles (usec): 00:15:35.576 | 1.00th=[11207], 5.00th=[12649], 10.00th=[13042], 20.00th=[13566], 00:15:35.576 | 30.00th=[14091], 40.00th=[15795], 50.00th=[16581], 60.00th=[17695], 00:15:35.576 | 70.00th=[18220], 80.00th=[18744], 90.00th=[20317], 95.00th=[22414], 00:15:35.576 | 99.00th=[23462], 99.50th=[27395], 99.90th=[28443], 99.95th=[31065], 00:15:35.576 | 99.99th=[31851] 00:15:35.576 write: IOPS=3869, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1004msec); 0 zone resets 00:15:35.576 slat (usec): min=5, max=15532, avg=132.00, stdev=796.28 00:15:35.576 clat (usec): min=612, max=42672, avg=17495.03, stdev=5557.92 00:15:35.576 lat (usec): min=3806, max=42680, avg=17627.02, stdev=5563.85 00:15:35.576 clat percentiles (usec): 00:15:35.576 | 1.00th=[ 5735], 5.00th=[ 9110], 10.00th=[10945], 20.00th=[13435], 00:15:35.576 | 30.00th=[14222], 40.00th=[15533], 50.00th=[16909], 60.00th=[19006], 00:15:35.576 | 70.00th=[20579], 80.00th=[21627], 90.00th=[22152], 95.00th=[26084], 00:15:35.576 | 99.00th=[39584], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:15:35.576 | 99.99th=[42730] 00:15:35.576 bw ( KiB/s): min=13672, max=16384, per=23.30%, avg=15028.00, stdev=1917.67, samples=2 00:15:35.576 iops : min= 3418, max= 4096, avg=3757.00, stdev=479.42, samples=2 00:15:35.576 lat (usec) : 750=0.01% 00:15:35.577 lat (msec) : 4=0.08%, 10=3.98%, 20=71.28%, 50=24.65% 00:15:35.577 cpu : usr=5.78%, 
sys=8.77%, ctx=281, majf=0, minf=15 00:15:35.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:35.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.577 issued rwts: total=3584,3885,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.577 job3: (groupid=0, jobs=1): err= 0: pid=4013802: Wed May 15 00:52:22 2024 00:15:35.577 read: IOPS=2900, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1006msec) 00:15:35.577 slat (usec): min=4, max=71756, avg=182.70, stdev=1867.93 00:15:35.577 clat (msec): min=2, max=150, avg=22.46, stdev=22.77 00:15:35.577 lat (msec): min=10, max=150, avg=22.64, stdev=22.96 00:15:35.577 clat percentiles (msec): 00:15:35.577 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:15:35.577 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:15:35.577 | 70.00th=[ 16], 80.00th=[ 20], 90.00th=[ 47], 95.00th=[ 86], 00:15:35.577 | 99.00th=[ 117], 99.50th=[ 117], 99.90th=[ 117], 99.95th=[ 121], 00:15:35.577 | 99.99th=[ 150] 00:15:35.577 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:15:35.577 slat (usec): min=5, max=23296, avg=140.33, stdev=928.90 00:15:35.577 clat (usec): min=7226, max=82110, avg=19316.16, stdev=12031.29 00:15:35.577 lat (usec): min=8146, max=82126, avg=19456.49, stdev=12094.03 00:15:35.577 clat percentiles (usec): 00:15:35.577 | 1.00th=[ 8979], 5.00th=[10421], 10.00th=[11994], 20.00th=[13042], 00:15:35.577 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14353], 60.00th=[14877], 00:15:35.577 | 70.00th=[16057], 80.00th=[26870], 90.00th=[35390], 95.00th=[45876], 00:15:35.577 | 99.00th=[61080], 99.50th=[82314], 99.90th=[82314], 99.95th=[82314], 00:15:35.577 | 99.99th=[82314] 00:15:35.577 bw ( KiB/s): min=12288, max=12312, per=19.07%, avg=12300.00, stdev=16.97, samples=2 00:15:35.577 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:15:35.577 lat (msec) : 4=0.02%, 10=1.85%, 20=78.31%, 50=13.47%, 100=5.44% 00:15:35.577 lat (msec) : 250=0.90% 00:15:35.577 cpu : usr=3.88%, sys=8.06%, ctx=309, majf=0, minf=11 00:15:35.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:15:35.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.577 issued rwts: total=2918,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.577 00:15:35.577 Run status group 0 (all jobs): 00:15:35.577 READ: bw=59.5MiB/s (62.4MB/s), 11.3MiB/s-17.9MiB/s (11.9MB/s-18.7MB/s), io=59.9MiB (62.8MB), run=1004-1007msec 00:15:35.577 WRITE: bw=63.0MiB/s (66.0MB/s), 11.9MiB/s-18.1MiB/s (12.5MB/s-19.0MB/s), io=63.4MiB (66.5MB), run=1004-1007msec 00:15:35.577 00:15:35.577 Disk stats (read/write): 00:15:35.577 nvme0n1: ios=3236/3561, merge=0/0, ticks=29721/34052, in_queue=63773, util=92.38% 00:15:35.577 nvme0n2: ios=3634/3880, merge=0/0, ticks=37893/36452, in_queue=74345, util=87.81% 00:15:35.577 nvme0n3: ios=3078/3072, merge=0/0, ticks=31967/30134, in_queue=62101, util=99.03% 00:15:35.577 nvme0n4: ios=2623/3072, merge=0/0, ticks=19069/27950, in_queue=47019, util=89.32% 00:15:35.577 00:52:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:35.577 00:52:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4013926 00:15:35.577 
00:52:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:35.577 00:52:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:35.577 [global] 00:15:35.577 thread=1 00:15:35.577 invalidate=1 00:15:35.577 rw=read 00:15:35.577 time_based=1 00:15:35.577 runtime=10 00:15:35.577 ioengine=libaio 00:15:35.577 direct=1 00:15:35.577 bs=4096 00:15:35.577 iodepth=1 00:15:35.577 norandommap=1 00:15:35.577 numjobs=1 00:15:35.577 00:15:35.577 [job0] 00:15:35.577 filename=/dev/nvme0n1 00:15:35.577 [job1] 00:15:35.577 filename=/dev/nvme0n2 00:15:35.577 [job2] 00:15:35.577 filename=/dev/nvme0n3 00:15:35.577 [job3] 00:15:35.577 filename=/dev/nvme0n4 00:15:35.577 Could not set queue depth (nvme0n1) 00:15:35.577 Could not set queue depth (nvme0n2) 00:15:35.577 Could not set queue depth (nvme0n3) 00:15:35.577 Could not set queue depth (nvme0n4) 00:15:35.577 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:35.577 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:35.577 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:35.577 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:35.577 fio-3.35 00:15:35.577 Starting 4 threads 00:15:38.865 00:52:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:38.865 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=24432640, buflen=4096 00:15:38.865 fio: pid=4014071, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:38.865 00:52:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:39.123 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=26284032, buflen=4096 00:15:39.123 fio: pid=4014070, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:39.123 00:52:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:39.123 00:52:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:39.383 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=33435648, buflen=4096 00:15:39.383 fio: pid=4014068, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:39.383 00:52:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:39.383 00:52:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:39.642 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=33402880, buflen=4096 00:15:39.642 fio: pid=4014069, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:39.642 00:52:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:39.642 00:52:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:39.642 00:15:39.642 
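The err=121 results that follow are the point of this final phase rather than a regression: fio.sh launches a 10-second read job in the background, then deletes the backing bdevs out from under the live subsystem, so the initiator sees Remote I/O error (errno 121, EREMOTEIO) on each /dev/nvme0nX as its namespace disappears. Below is a rough sketch of the pattern built from the commands traced above; the backgrounding and loop structure are an assumed reconstruction, not a verbatim copy of target/fio.sh, and rpc.py / fio-wrapper stand in for the full jenkins workspace paths:

  sync                                            # flush dirty pages before the read phase
  fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!                                      # 4013926 in this run
  sleep 3                                         # let the read workload get going first
  rpc.py bdev_raid_delete concat0                 # nvme0n4 starts failing with EREMOTEIO
  rpc.py bdev_raid_delete raid0                   # then nvme0n3
  for malloc_bdev in Malloc0 Malloc1 Malloc2; do
      rpc.py bdev_malloc_delete "$malloc_bdev"    # then nvme0n1 and nvme0n2
  done
  wait $fio_pid                                   # fio exits once the expected errors are logged

The per-job summaries below accordingly report err=121 from io_u.c:1889 after only about 3.2 to 3.8 seconds and roughly 23 to 32 MiB of the nominal 10-second read, the expected signature of namespaces hot-removed mid-I/O.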
job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4014068: Wed May 15 00:52:26 2024 00:15:39.642 read: IOPS=2340, BW=9361KiB/s (9586kB/s)(31.9MiB/3488msec) 00:15:39.642 slat (usec): min=4, max=15516, avg=17.02, stdev=294.02 00:15:39.642 clat (usec): min=287, max=41438, avg=407.40, stdev=458.45 00:15:39.642 lat (usec): min=294, max=41452, avg=424.42, stdev=545.31 00:15:39.642 clat percentiles (usec): 00:15:39.642 | 1.00th=[ 306], 5.00th=[ 326], 10.00th=[ 347], 20.00th=[ 371], 00:15:39.642 | 30.00th=[ 383], 40.00th=[ 388], 50.00th=[ 396], 60.00th=[ 404], 00:15:39.642 | 70.00th=[ 416], 80.00th=[ 429], 90.00th=[ 453], 95.00th=[ 498], 00:15:39.642 | 99.00th=[ 562], 99.50th=[ 619], 99.90th=[ 988], 99.95th=[ 1237], 00:15:39.642 | 99.99th=[41681] 00:15:39.642 bw ( KiB/s): min= 8768, max=10123, per=30.63%, avg=9291.17, stdev=543.66, samples=6 00:15:39.642 iops : min= 2192, max= 2530, avg=2322.67, stdev=135.69, samples=6 00:15:39.642 lat (usec) : 500=95.43%, 750=4.34%, 1000=0.12% 00:15:39.642 lat (msec) : 2=0.07%, 4=0.01%, 50=0.01% 00:15:39.642 cpu : usr=1.52%, sys=4.07%, ctx=8168, majf=0, minf=1 00:15:39.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:39.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.642 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.642 issued rwts: total=8164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:39.642 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4014069: Wed May 15 00:52:26 2024 00:15:39.642 read: IOPS=2154, BW=8618KiB/s (8825kB/s)(31.9MiB/3785msec) 00:15:39.642 slat (usec): min=5, max=18295, avg=21.49, stdev=390.63 00:15:39.642 clat (usec): min=278, max=41580, avg=439.94, stdev=1373.66 00:15:39.642 lat (usec): min=284, max=41591, avg=461.43, stdev=1429.01 00:15:39.642 clat percentiles (usec): 00:15:39.642 | 1.00th=[ 302], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 351], 00:15:39.642 | 30.00th=[ 363], 40.00th=[ 371], 50.00th=[ 383], 60.00th=[ 392], 00:15:39.642 | 70.00th=[ 412], 80.00th=[ 433], 90.00th=[ 461], 95.00th=[ 482], 00:15:39.642 | 99.00th=[ 594], 99.50th=[ 685], 99.90th=[41157], 99.95th=[41157], 00:15:39.642 | 99.99th=[41681] 00:15:39.642 bw ( KiB/s): min= 4926, max=10168, per=28.13%, avg=8533.86, stdev=1817.37, samples=7 00:15:39.642 iops : min= 1231, max= 2542, avg=2133.29, stdev=454.50, samples=7 00:15:39.642 lat (usec) : 500=96.95%, 750=2.73%, 1000=0.10% 00:15:39.642 lat (msec) : 2=0.06%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.11% 00:15:39.642 cpu : usr=1.29%, sys=3.22%, ctx=8164, majf=0, minf=1 00:15:39.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:39.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.642 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.642 issued rwts: total=8156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:39.642 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4014070: Wed May 15 00:52:26 2024 00:15:39.642 read: IOPS=2009, BW=8036KiB/s (8229kB/s)(25.1MiB/3194msec) 00:15:39.642 slat (usec): min=4, max=16452, avg=16.69, stdev=225.97 00:15:39.642 clat (usec): min=303, max=41979, avg=477.96, stdev=1317.87 00:15:39.642 lat (usec): min=308, max=41994, 
avg=494.66, stdev=1337.11 00:15:39.642 clat percentiles (usec): 00:15:39.642 | 1.00th=[ 326], 5.00th=[ 351], 10.00th=[ 363], 20.00th=[ 379], 00:15:39.642 | 30.00th=[ 388], 40.00th=[ 400], 50.00th=[ 408], 60.00th=[ 420], 00:15:39.642 | 70.00th=[ 441], 80.00th=[ 474], 90.00th=[ 537], 95.00th=[ 594], 00:15:39.642 | 99.00th=[ 783], 99.50th=[ 807], 99.90th=[25297], 99.95th=[41681], 00:15:39.642 | 99.99th=[42206] 00:15:39.642 bw ( KiB/s): min= 4464, max= 9480, per=25.99%, avg=7882.67, stdev=1783.10, samples=6 00:15:39.642 iops : min= 1116, max= 2370, avg=1970.67, stdev=445.78, samples=6 00:15:39.642 lat (usec) : 500=85.77%, 750=11.86%, 1000=2.21% 00:15:39.642 lat (msec) : 2=0.02%, 20=0.02%, 50=0.11% 00:15:39.642 cpu : usr=1.10%, sys=3.19%, ctx=6423, majf=0, minf=1 00:15:39.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:39.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.642 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.642 issued rwts: total=6418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:39.642 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4014071: Wed May 15 00:52:26 2024 00:15:39.642 read: IOPS=2056, BW=8225KiB/s (8422kB/s)(23.3MiB/2901msec) 00:15:39.642 slat (nsec): min=6120, max=60082, avg=12318.48, stdev=5366.61 00:15:39.642 clat (usec): min=334, max=41325, avg=470.64, stdev=1054.91 00:15:39.642 lat (usec): min=341, max=41340, avg=482.96, stdev=1055.08 00:15:39.642 clat percentiles (usec): 00:15:39.642 | 1.00th=[ 359], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 388], 00:15:39.642 | 30.00th=[ 400], 40.00th=[ 408], 50.00th=[ 424], 60.00th=[ 441], 00:15:39.642 | 70.00th=[ 457], 80.00th=[ 482], 90.00th=[ 523], 95.00th=[ 586], 00:15:39.642 | 99.00th=[ 783], 99.50th=[ 791], 99.90th=[ 1045], 99.95th=[40633], 00:15:39.642 | 99.99th=[41157] 00:15:39.642 bw ( KiB/s): min= 5808, max= 8944, per=26.57%, avg=8059.20, stdev=1294.31, samples=5 00:15:39.642 iops : min= 1452, max= 2236, avg=2014.80, stdev=323.58, samples=5 00:15:39.642 lat (usec) : 500=85.85%, 750=11.83%, 1000=2.20% 00:15:39.642 lat (msec) : 2=0.03%, 50=0.07% 00:15:39.642 cpu : usr=2.07%, sys=3.76%, ctx=5966, majf=0, minf=1 00:15:39.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:39.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.642 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.642 issued rwts: total=5966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:39.642 00:15:39.642 Run status group 0 (all jobs): 00:15:39.642 READ: bw=29.6MiB/s (31.1MB/s), 8036KiB/s-9361KiB/s (8229kB/s-9586kB/s), io=112MiB (118MB), run=2901-3785msec 00:15:39.642 00:15:39.642 Disk stats (read/write): 00:15:39.642 nvme0n1: ios=7892/0, merge=0/0, ticks=3138/0, in_queue=3138, util=95.28% 00:15:39.642 nvme0n2: ios=7706/0, merge=0/0, ticks=3602/0, in_queue=3602, util=96.89% 00:15:39.642 nvme0n3: ios=6168/0, merge=0/0, ticks=2928/0, in_queue=2928, util=96.01% 00:15:39.642 nvme0n4: ios=5843/0, merge=0/0, ticks=2655/0, in_queue=2655, util=96.71% 00:15:39.902 00:52:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:39.902 00:52:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:40.218 00:52:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:40.218 00:52:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:40.533 00:52:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:40.533 00:52:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:40.793 00:52:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:40.793 00:52:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:41.052 00:52:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:41.052 00:52:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 4013926 00:15:41.052 00:52:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:41.052 00:52:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:41.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.311 00:52:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:41.311 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:15:41.311 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:41.311 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:41.311 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:41.311 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:41.311 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:15:41.311 00:52:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:41.311 00:52:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:41.311 nvmf hotplug test: fio failed as expected 00:15:41.311 00:52:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.570 00:52:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:41.570 00:52:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:41.570 00:52:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:41.570 00:52:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:41.570 00:52:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:41.570 00:52:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:41.570 00:52:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:41.570 00:52:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:41.570 00:52:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:41.570 00:52:28 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:41.570 00:52:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:41.570 rmmod nvme_tcp 00:15:41.570 rmmod nvme_fabrics 00:15:41.570 rmmod nvme_keyring 00:15:41.570 00:52:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:41.570 00:52:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:41.570 00:52:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:41.571 00:52:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 4012416 ']' 00:15:41.571 00:52:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 4012416 00:15:41.571 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 4012416 ']' 00:15:41.571 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 4012416 00:15:41.571 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:15:41.571 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:41.571 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4012416 00:15:41.571 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:41.571 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:41.571 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4012416' 00:15:41.571 killing process with pid 4012416 00:15:41.571 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 4012416 00:15:41.571 [2024-05-15 00:52:28.531873] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:41.571 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 4012416 00:15:41.831 00:52:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:41.831 00:52:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:41.831 00:52:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:41.831 00:52:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.831 00:52:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.831 00:52:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.831 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.831 00:52:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.367 00:52:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:44.367 00:15:44.367 real 0m23.193s 00:15:44.367 user 1m19.315s 00:15:44.367 sys 0m7.751s 00:15:44.367 00:52:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:44.367 00:52:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.367 ************************************ 00:15:44.367 END TEST nvmf_fio_target 00:15:44.367 ************************************ 00:15:44.367 00:52:30 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh 
--transport=tcp 00:15:44.367 00:52:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:44.367 00:52:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:44.367 00:52:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:44.367 ************************************ 00:15:44.367 START TEST nvmf_bdevio 00:15:44.367 ************************************ 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:44.367 * Looking for test storage... 00:15:44.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:44.367 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
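What nvmftestinit does next, discovering the two E810 ports and wiring one of them into a network namespace as the target side, can be condensed as follows; device names and addresses are the ones printed in the trace below:

# Condensed sketch of the nvmf_tcp_init steps traced below: the target
# port (cvl_0_0) moves into its own namespace and gets 10.0.0.2, while
# the initiator keeps cvl_0_1 with 10.0.0.1; names/IPs are from the log.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # initiator-side reachability check of the target IP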
00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:44.368 00:52:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:15:45.749 Found 0000:08:00.0 (0x8086 - 0x159b) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:15:45.749 Found 0000:08:00.1 (0x8086 - 0x159b) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:15:45.749 Found net devices under 0000:08:00.0: cvl_0_0 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:15:45.749 Found net devices under 0000:08:00.1: cvl_0_1 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:45.749 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:45.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:45.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:15:45.750 00:15:45.750 --- 10.0.0.2 ping statistics --- 00:15:45.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.750 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:45.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:45.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:15:45.750 00:15:45.750 --- 10.0.0.1 ping statistics --- 00:15:45.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.750 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=4016101 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 4016101 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 4016101 ']' 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:45.750 00:52:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:45.750 [2024-05-15 00:52:32.738508] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:15:45.750 [2024-05-15 00:52:32.738607] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.750 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.750 [2024-05-15 00:52:32.805683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.009 [2024-05-15 00:52:32.926370] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.009 [2024-05-15 00:52:32.926435] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.009 [2024-05-15 00:52:32.926451] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.009 [2024-05-15 00:52:32.926464] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.009 [2024-05-15 00:52:32.926476] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.009 [2024-05-15 00:52:32.926584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:46.009 [2024-05-15 00:52:32.926640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:46.009 [2024-05-15 00:52:32.926689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:46.009 [2024-05-15 00:52:32.926692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:46.009 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:46.009 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:15:46.009 00:52:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:46.009 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:46.009 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:46.267 [2024-05-15 00:52:33.076644] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:46.267 Malloc0 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:46.267 [2024-05-15 00:52:33.126782] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:46.267 [2024-05-15 00:52:33.127079] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:46.267 { 00:15:46.267 "params": { 00:15:46.267 "name": "Nvme$subsystem", 00:15:46.267 "trtype": "$TEST_TRANSPORT", 00:15:46.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:46.267 "adrfam": "ipv4", 00:15:46.267 "trsvcid": "$NVMF_PORT", 00:15:46.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:46.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:46.267 "hdgst": ${hdgst:-false}, 00:15:46.267 "ddgst": ${ddgst:-false} 00:15:46.267 }, 00:15:46.267 "method": "bdev_nvme_attach_controller" 00:15:46.267 } 00:15:46.267 EOF 00:15:46.267 )") 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:46.267 00:52:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:46.267 "params": { 00:15:46.267 "name": "Nvme1", 00:15:46.267 "trtype": "tcp", 00:15:46.267 "traddr": "10.0.0.2", 00:15:46.267 "adrfam": "ipv4", 00:15:46.267 "trsvcid": "4420", 00:15:46.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:46.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:46.267 "hdgst": false, 00:15:46.267 "ddgst": false 00:15:46.267 }, 00:15:46.267 "method": "bdev_nvme_attach_controller" 00:15:46.267 }' 00:15:46.267 [2024-05-15 00:52:33.173520] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:15:46.267 [2024-05-15 00:52:33.173611] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4016126 ] 00:15:46.267 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.267 [2024-05-15 00:52:33.234546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:46.528 [2024-05-15 00:52:33.357198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.528 [2024-05-15 00:52:33.357277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.528 [2024-05-15 00:52:33.357312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.786 I/O targets: 00:15:46.786 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:46.786 00:15:46.786 00:15:46.786 CUnit - A unit testing framework for C - Version 2.1-3 00:15:46.786 http://cunit.sourceforge.net/ 00:15:46.786 00:15:46.786 00:15:46.786 Suite: bdevio tests on: Nvme1n1 00:15:46.786 Test: blockdev write read block ...passed 00:15:46.786 Test: blockdev write zeroes read block ...passed 00:15:46.786 Test: blockdev write zeroes read no split ...passed 00:15:46.786 Test: blockdev write zeroes read split ...passed 00:15:46.786 Test: blockdev write zeroes read split partial ...passed 00:15:46.786 Test: blockdev reset ...[2024-05-15 00:52:33.821492] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:46.786 [2024-05-15 00:52:33.821615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127330 (9): Bad file descriptor 00:15:46.786 [2024-05-15 00:52:33.839292] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:46.786 passed 00:15:46.787 Test: blockdev write read 8 blocks ...passed 00:15:46.787 Test: blockdev write read size > 128k ...passed 00:15:46.787 Test: blockdev write read invalid size ...passed 00:15:47.044 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:47.044 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:47.044 Test: blockdev write read max offset ...passed 00:15:47.044 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:47.044 Test: blockdev writev readv 8 blocks ...passed 00:15:47.044 Test: blockdev writev readv 30 x 1block ...passed 00:15:47.044 Test: blockdev writev readv block ...passed 00:15:47.044 Test: blockdev writev readv size > 128k ...passed 00:15:47.044 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:47.044 Test: blockdev comparev and writev ...[2024-05-15 00:52:34.099423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.044 [2024-05-15 00:52:34.099468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:47.044 [2024-05-15 00:52:34.099496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.044 [2024-05-15 00:52:34.099513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:47.044 [2024-05-15 00:52:34.099924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.044 [2024-05-15 00:52:34.099957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:47.044 [2024-05-15 00:52:34.099981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.044 [2024-05-15 00:52:34.099999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:47.044 [2024-05-15 00:52:34.100366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.044 [2024-05-15 00:52:34.100392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:47.044 [2024-05-15 00:52:34.100415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.044 [2024-05-15 00:52:34.100432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:47.044 [2024-05-15 00:52:34.100805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.044 [2024-05-15 00:52:34.100830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:47.044 [2024-05-15 00:52:34.100853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.044 [2024-05-15 00:52:34.100870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:47.303 passed 00:15:47.303 Test: blockdev nvme passthru rw ...passed 00:15:47.303 Test: blockdev nvme passthru vendor specific ...[2024-05-15 00:52:34.185359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:47.303 [2024-05-15 00:52:34.185389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:47.303 [2024-05-15 00:52:34.185618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:47.303 [2024-05-15 00:52:34.185642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:47.303 [2024-05-15 00:52:34.185868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:47.303 [2024-05-15 00:52:34.185891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:47.303 [2024-05-15 00:52:34.186124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:47.303 [2024-05-15 00:52:34.186148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:47.303 passed 00:15:47.303 Test: blockdev nvme admin passthru ...passed 00:15:47.303 Test: blockdev copy ...passed 00:15:47.303 00:15:47.303 Run Summary: Type Total Ran Passed Failed Inactive 00:15:47.303 suites 1 1 n/a 0 0 00:15:47.303 tests 23 23 23 0 0 00:15:47.303 asserts 152 152 152 0 n/a 00:15:47.303 00:15:47.303 Elapsed time = 1.086 seconds 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:47.562 rmmod nvme_tcp 00:15:47.562 rmmod nvme_fabrics 00:15:47.562 rmmod nvme_keyring 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 4016101 ']' 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 4016101 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
4016101 ']' 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 4016101 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4016101 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4016101' 00:15:47.562 killing process with pid 4016101 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 4016101 00:15:47.562 [2024-05-15 00:52:34.543924] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:47.562 00:52:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 4016101 00:15:47.821 00:52:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:47.821 00:52:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:47.821 00:52:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:47.821 00:52:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.821 00:52:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:47.821 00:52:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.821 00:52:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.821 00:52:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:50.364 00:15:50.364 real 0m5.971s 00:15:50.364 user 0m10.363s 00:15:50.364 sys 0m1.741s 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:50.364 ************************************ 00:15:50.364 END TEST nvmf_bdevio 00:15:50.364 ************************************ 00:15:50.364 00:52:36 nvmf_tcp -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:15:50.364 00:52:36 nvmf_tcp -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:50.364 00:52:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:15:50.364 00:52:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:50.364 00:52:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:50.364 ************************************ 00:15:50.364 START TEST nvmf_bdevio_no_huge 00:15:50.364 ************************************ 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:50.364 * Looking for test storage... 
00:15:50.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:50.364 00:52:36 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:15:50.364 00:52:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:15:51.742 Found 0000:08:00.0 (0x8086 - 0x159b) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:15:51.742 Found 0000:08:00.1 (0x8086 - 0x159b) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:15:51.742 Found net devices under 0000:08:00.0: cvl_0_0 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.742 00:52:38 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:15:51.742 Found net devices under 0000:08:00.1: cvl_0_1 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:51.742 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:51.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:15:51.743 00:15:51.743 --- 10.0.0.2 ping statistics --- 00:15:51.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.743 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:51.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:15:51.743 00:15:51.743 --- 10.0.0.1 ping statistics --- 00:15:51.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.743 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=4017729 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 4017729 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 4017729 ']' 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
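The nvmf_tcp_init sequence traced above builds the whole test topology out of one physical NIC: port cvl_0_0 is moved into a private network namespace to act as the NVMe-oF target at 10.0.0.2, while its sibling port cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, and an iptables rule accepts TCP/4420 on the initiator-side interface. Collected into a standalone sketch, using the interface names and addresses from this run (they will differ on other machines):

    # Sketch of the nvmf_tcp_init steps traced above (nvmf/common.sh@229-268)
    TGT_IF=cvl_0_0            # target-side port, moved into the namespace
    INI_IF=cvl_0_1            # initiator-side port, stays in the host
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # host -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> host

This is also why the target binary is launched as ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt in the lines that follow: NVMF_APP is prefixed with NVMF_TARGET_NS_CMD so the TCP listener binds inside the namespace.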
00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:51.743 00:52:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:51.743 [2024-05-15 00:52:38.756958] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:15:51.743 [2024-05-15 00:52:38.757050] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:52.001 [2024-05-15 00:52:38.829393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:52.001 [2024-05-15 00:52:38.948710] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.001 [2024-05-15 00:52:38.948771] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.001 [2024-05-15 00:52:38.948786] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.001 [2024-05-15 00:52:38.948799] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.001 [2024-05-15 00:52:38.948815] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.001 [2024-05-15 00:52:38.948923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:52.001 [2024-05-15 00:52:38.949126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:52.001 [2024-05-15 00:52:38.949032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:52.001 [2024-05-15 00:52:38.949129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.001 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:52.001 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:15:52.001 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:52.001 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:52.001 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:52.260 [2024-05-15 00:52:39.076100] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:52.260 Malloc0 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:52.260 [2024-05-15 00:52:39.114510] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:52.260 [2024-05-15 00:52:39.114764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:52.260 { 00:15:52.260 "params": { 00:15:52.260 "name": "Nvme$subsystem", 00:15:52.260 "trtype": "$TEST_TRANSPORT", 00:15:52.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:52.260 "adrfam": "ipv4", 00:15:52.260 "trsvcid": "$NVMF_PORT", 00:15:52.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:52.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:52.260 "hdgst": ${hdgst:-false}, 00:15:52.260 "ddgst": ${ddgst:-false} 00:15:52.260 }, 00:15:52.260 "method": "bdev_nvme_attach_controller" 00:15:52.260 } 00:15:52.260 EOF 00:15:52.260 )") 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:15:52.260 00:52:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:52.260 "params": { 00:15:52.260 "name": "Nvme1", 00:15:52.260 "trtype": "tcp", 00:15:52.260 "traddr": "10.0.0.2", 00:15:52.260 "adrfam": "ipv4", 00:15:52.260 "trsvcid": "4420", 00:15:52.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:52.260 "hdgst": false, 00:15:52.260 "ddgst": false 00:15:52.260 }, 00:15:52.260 "method": "bdev_nvme_attach_controller" 00:15:52.260 }' 00:15:52.260 [2024-05-15 00:52:39.161242] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:15:52.260 [2024-05-15 00:52:39.161343] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4017841 ] 00:15:52.260 [2024-05-15 00:52:39.226214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:52.518 [2024-05-15 00:52:39.349155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.518 [2024-05-15 00:52:39.349233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.518 [2024-05-15 00:52:39.349268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.518 I/O targets: 00:15:52.518 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:52.518 00:15:52.518 00:15:52.518 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.518 http://cunit.sourceforge.net/ 00:15:52.518 00:15:52.518 00:15:52.518 Suite: bdevio tests on: Nvme1n1 00:15:52.518 Test: blockdev write read block ...passed 00:15:52.776 Test: blockdev write zeroes read block ...passed 00:15:52.776 Test: blockdev write zeroes read no split ...passed 00:15:52.776 Test: blockdev write zeroes read split ...passed 00:15:52.776 Test: blockdev write zeroes read split partial ...passed 00:15:52.776 Test: blockdev reset ...[2024-05-15 00:52:39.650799] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:52.776 [2024-05-15 00:52:39.650949] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16daf70 (9): Bad file descriptor 00:15:52.776 [2024-05-15 00:52:39.708667] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:52.776 passed 00:15:52.776 Test: blockdev write read 8 blocks ...passed 00:15:52.776 Test: blockdev write read size > 128k ...passed 00:15:52.776 Test: blockdev write read invalid size ...passed 00:15:52.776 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:52.776 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:52.776 Test: blockdev write read max offset ...passed 00:15:53.034 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:53.034 Test: blockdev writev readv 8 blocks ...passed 00:15:53.034 Test: blockdev writev readv 30 x 1block ...passed 00:15:53.034 Test: blockdev writev readv block ...passed 00:15:53.034 Test: blockdev writev readv size > 128k ...passed 00:15:53.034 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:53.034 Test: blockdev comparev and writev ...[2024-05-15 00:52:39.887762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:53.034 [2024-05-15 00:52:39.887806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:53.034 [2024-05-15 00:52:39.887833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:53.034 [2024-05-15 00:52:39.887851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:53.034 [2024-05-15 00:52:39.888272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:53.034 [2024-05-15 00:52:39.888298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:53.034 [2024-05-15 00:52:39.888322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:53.034 [2024-05-15 00:52:39.888339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:53.034 [2024-05-15 00:52:39.888745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:53.034 [2024-05-15 00:52:39.888770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:53.034 [2024-05-15 00:52:39.888793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:53.034 [2024-05-15 00:52:39.888810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:53.034 [2024-05-15 00:52:39.889220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:53.034 [2024-05-15 00:52:39.889246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:53.034 [2024-05-15 00:52:39.889269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:53.034 [2024-05-15 00:52:39.889285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:53.034 passed 00:15:53.034 Test: blockdev nvme passthru rw ...passed 00:15:53.034 Test: blockdev nvme passthru vendor specific ...[2024-05-15 00:52:39.972366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:53.034 [2024-05-15 00:52:39.972394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:53.034 [2024-05-15 00:52:39.972617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:53.034 [2024-05-15 00:52:39.972643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:53.034 [2024-05-15 00:52:39.972862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:53.034 [2024-05-15 00:52:39.972885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:53.034 [2024-05-15 00:52:39.973114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:53.034 [2024-05-15 00:52:39.973138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:53.034 passed 00:15:53.034 Test: blockdev nvme admin passthru ...passed 00:15:53.034 Test: blockdev copy ...passed 00:15:53.034 00:15:53.034 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.034 suites 1 1 n/a 0 0 00:15:53.034 tests 23 23 23 0 0 00:15:53.034 asserts 152 152 152 0 n/a 00:15:53.034 00:15:53.034 Elapsed time = 1.041 seconds 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:53.600 rmmod nvme_tcp 00:15:53.600 rmmod nvme_fabrics 00:15:53.600 rmmod nvme_keyring 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 4017729 ']' 00:15:53.600 00:52:40 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 4017729 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 4017729 ']' 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 4017729 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4017729 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4017729' 00:15:53.600 killing process with pid 4017729 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 4017729 00:15:53.600 [2024-05-15 00:52:40.479317] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:53.600 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 4017729 00:15:53.858 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:53.858 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:53.858 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:53.858 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:53.858 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:53.858 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.858 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.858 00:52:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.395 00:52:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:56.395 00:15:56.395 real 0m6.048s 00:15:56.395 user 0m9.895s 00:15:56.395 sys 0m2.201s 00:15:56.395 00:52:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:56.395 00:52:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:56.395 ************************************ 00:15:56.395 END TEST nvmf_bdevio_no_huge 00:15:56.395 ************************************ 00:15:56.395 00:52:42 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:56.395 00:52:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:56.395 00:52:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:56.395 00:52:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:56.395 ************************************ 00:15:56.395 START TEST nvmf_tls 00:15:56.395 ************************************ 00:15:56.395 00:52:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
00:15:56.395 * Looking for test storage... 00:15:56.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:15:56.395 00:52:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.774 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:15:57.775 Found 0000:08:00.0 (0x8086 - 0x159b) 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:57.775 
00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:15:57.775 Found 0000:08:00.1 (0x8086 - 0x159b) 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:15:57.775 Found net devices under 0000:08:00.0: cvl_0_0 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:15:57.775 Found net devices under 0000:08:00.1: cvl_0_1 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:57.775 
00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:57.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:15:57.775 00:15:57.775 --- 10.0.0.2 ping statistics --- 00:15:57.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.775 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:57.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:57.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:15:57.775 00:15:57.775 --- 10.0.0.1 ping statistics --- 00:15:57.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.775 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:57.775 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:58.032 00:52:44 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:58.032 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:58.032 00:52:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:58.032 00:52:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:58.032 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4019443 00:15:58.032 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:58.032 00:52:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4019443 00:15:58.032 00:52:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4019443 ']' 00:15:58.032 00:52:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.032 00:52:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:58.032 00:52:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.032 00:52:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:58.032 00:52:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:58.032 [2024-05-15 00:52:44.894982] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:15:58.032 [2024-05-15 00:52:44.895072] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.032 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.033 [2024-05-15 00:52:44.960794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.033 [2024-05-15 00:52:45.076161] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.033 [2024-05-15 00:52:45.076224] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:58.033 [2024-05-15 00:52:45.076239] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.033 [2024-05-15 00:52:45.076251] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.033 [2024-05-15 00:52:45.076263] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.033 [2024-05-15 00:52:45.076299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.290 00:52:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:58.290 00:52:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:58.290 00:52:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:58.290 00:52:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:58.290 00:52:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:58.290 00:52:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.290 00:52:45 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:15:58.290 00:52:45 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:58.549 true 00:15:58.549 00:52:45 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:58.549 00:52:45 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:15:58.807 00:52:45 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:15:58.807 00:52:45 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:15:58.807 00:52:45 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:59.065 00:52:46 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:59.065 00:52:46 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:15:59.323 00:52:46 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:15:59.323 00:52:46 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:15:59.323 00:52:46 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:59.581 00:52:46 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:59.581 00:52:46 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:16:00.147 00:52:46 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:16:00.147 00:52:46 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:00.147 00:52:46 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:00.147 00:52:46 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:00.405 00:52:47 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:16:00.405 00:52:47 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:00.405 00:52:47 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:00.661 00:52:47 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:00.661 00:52:47 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:16:00.918 00:52:47 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:16:00.918 00:52:47 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:00.918 00:52:47 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:01.175 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:01.175 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:01.432 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:16:01.432 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:01.432 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:01.432 00:52:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:01.432 00:52:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:01.432 00:52:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:01.432 00:52:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:16:01.432 00:52:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:01.432 00:52:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:01.432 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:01.432 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:01.432 00:52:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:01.432 00:52:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:01.432 00:52:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:01.432 00:52:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:16:01.432 00:52:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:01.432 00:52:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:01.690 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:01.690 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:16:01.690 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.acxocC8CeT 00:16:01.690 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:01.690 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.N2ugsRLZ08 00:16:01.690 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:01.690 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:01.690 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.acxocC8CeT 00:16:01.690 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.N2ugsRLZ08 00:16:01.690 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:16:01.948 00:52:48 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:02.207 00:52:49 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.acxocC8CeT 00:16:02.207 00:52:49 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.acxocC8CeT 00:16:02.207 00:52:49 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:02.466 [2024-05-15 00:52:49.449196] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.466 00:52:49 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:02.724 00:52:49 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:02.982 [2024-05-15 00:52:50.026704] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:02.982 [2024-05-15 00:52:50.026817] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:02.982 [2024-05-15 00:52:50.027036] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.241 00:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:03.505 malloc0 00:16:03.505 00:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:03.765 00:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.acxocC8CeT 00:16:04.022 [2024-05-15 00:52:50.902904] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:04.022 00:52:50 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.acxocC8CeT 00:16:04.022 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.987 Initializing NVMe Controllers 00:16:13.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:13.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:13.987 Initialization complete. Launching workers. 
00:16:13.987 ======================================================== 00:16:13.987 Latency(us) 00:16:13.987 Device Information : IOPS MiB/s Average min max 00:16:13.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7376.80 28.82 8678.81 1392.93 10831.22 00:16:13.987 ======================================================== 00:16:13.987 Total : 7376.80 28.82 8678.81 1392.93 10831.22 00:16:13.987 00:16:13.987 00:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.acxocC8CeT 00:16:13.987 00:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:13.987 00:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:13.987 00:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:13.987 00:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.acxocC8CeT' 00:16:13.987 00:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:13.987 00:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4020903 00:16:13.987 00:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:13.987 00:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:13.987 00:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4020903 /var/tmp/bdevperf.sock 00:16:13.987 00:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4020903 ']' 00:16:13.987 00:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:13.987 00:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:13.987 00:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:13.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:13.987 00:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:13.987 00:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:14.245 [2024-05-15 00:53:01.086475] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:16:14.245 [2024-05-15 00:53:01.086578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4020903 ] 00:16:14.245 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.245 [2024-05-15 00:53:01.147114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.245 [2024-05-15 00:53:01.264007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.503 00:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:14.503 00:53:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:14.503 00:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.acxocC8CeT 00:16:14.761 [2024-05-15 00:53:01.636540] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:14.761 [2024-05-15 00:53:01.636670] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:14.761 TLSTESTn1 00:16:14.761 00:53:01 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:15.019 Running I/O for 10 seconds... 00:16:25.075 00:16:25.075 Latency(us) 00:16:25.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.075 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:25.075 Verification LBA range: start 0x0 length 0x2000 00:16:25.075 TLSTESTn1 : 10.05 2369.10 9.25 0.00 0.00 53873.95 7136.14 86992.97 00:16:25.075 =================================================================================================================== 00:16:25.075 Total : 2369.10 9.25 0.00 0.00 53873.95 7136.14 86992.97 00:16:25.075 0 00:16:25.075 00:53:11 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:25.075 00:53:11 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4020903 00:16:25.075 00:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4020903 ']' 00:16:25.075 00:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4020903 00:16:25.075 00:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:25.075 00:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:25.075 00:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4020903 00:16:25.075 00:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:25.075 00:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:25.075 00:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4020903' 00:16:25.075 killing process with pid 4020903 00:16:25.075 00:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4020903 00:16:25.075 Received shutdown signal, test time was about 10.000000 seconds 00:16:25.075 00:16:25.075 Latency(us) 00:16:25.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:16:25.075 =================================================================================================================== 00:16:25.075 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:25.075 [2024-05-15 00:53:11.965020] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:25.075 00:53:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4020903 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N2ugsRLZ08 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N2ugsRLZ08 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N2ugsRLZ08 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.N2ugsRLZ08' 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4021921 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4021921 /var/tmp/bdevperf.sock 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4021921 ']' 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:25.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:25.332 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.332 [2024-05-15 00:53:12.230507] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:16:25.332 [2024-05-15 00:53:12.230607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4021921 ] 00:16:25.332 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.332 [2024-05-15 00:53:12.291501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.589 [2024-05-15 00:53:12.408146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.589 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:25.589 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:25.589 00:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N2ugsRLZ08 00:16:25.847 [2024-05-15 00:53:12.779629] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:25.847 [2024-05-15 00:53:12.779764] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:25.847 [2024-05-15 00:53:12.785620] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:25.847 [2024-05-15 00:53:12.786110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14358a0 (107): Transport endpoint is not connected 00:16:25.847 [2024-05-15 00:53:12.787098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14358a0 (9): Bad file descriptor 00:16:25.847 [2024-05-15 00:53:12.788097] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:25.847 [2024-05-15 00:53:12.788120] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:25.847 [2024-05-15 00:53:12.788141] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
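What failed here: the attach used the second key file (/tmp/tmp.N2ugsRLZ08), which was never registered with the target, so the TLS handshake cannot complete; the initiator then sees the socket as dead (errno 107, Transport endpoint is not connected, followed by a bad file descriptor on the next poll) and the controller lands in the failed state. The JSON-RPC dump that follows records the attach attempt itself. The failing call, reconstructed from the trace above with the workspace path shortened:

    # same attach as the successful run, but with a key the target does not know; expected to fail
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.N2ugsRLZ08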
00:16:25.847 request: 00:16:25.847 { 00:16:25.847 "name": "TLSTEST", 00:16:25.847 "trtype": "tcp", 00:16:25.847 "traddr": "10.0.0.2", 00:16:25.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:25.847 "adrfam": "ipv4", 00:16:25.847 "trsvcid": "4420", 00:16:25.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:25.847 "psk": "/tmp/tmp.N2ugsRLZ08", 00:16:25.847 "method": "bdev_nvme_attach_controller", 00:16:25.847 "req_id": 1 00:16:25.847 } 00:16:25.847 Got JSON-RPC error response 00:16:25.847 response: 00:16:25.847 { 00:16:25.847 "code": -32602, 00:16:25.847 "message": "Invalid parameters" 00:16:25.847 } 00:16:25.847 00:53:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4021921 00:16:25.847 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4021921 ']' 00:16:25.847 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4021921 00:16:25.847 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:25.847 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:25.847 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4021921 00:16:25.847 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:25.847 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:25.847 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4021921' 00:16:25.847 killing process with pid 4021921 00:16:25.847 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4021921 00:16:25.847 Received shutdown signal, test time was about 10.000000 seconds 00:16:25.847 00:16:25.847 Latency(us) 00:16:25.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.847 =================================================================================================================== 00:16:25.847 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:25.847 [2024-05-15 00:53:12.838361] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:25.847 00:53:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4021921 00:16:26.105 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:26.105 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:26.105 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:26.105 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:26.105 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.acxocC8CeT 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.acxocC8CeT 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.acxocC8CeT 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.acxocC8CeT' 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4022018 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4022018 /var/tmp/bdevperf.sock 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4022018 ']' 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:26.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:26.106 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:26.106 [2024-05-15 00:53:13.100060] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:16:26.106 [2024-05-15 00:53:13.100159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4022018 ] 00:16:26.106 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.364 [2024-05-15 00:53:13.164372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.364 [2024-05-15 00:53:13.284559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.364 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:26.364 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:26.364 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.acxocC8CeT 00:16:26.620 [2024-05-15 00:53:13.661765] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:26.620 [2024-05-15 00:53:13.661891] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:26.620 [2024-05-15 00:53:13.667490] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:26.620 [2024-05-15 00:53:13.667524] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:26.620 [2024-05-15 00:53:13.667569] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:26.620 [2024-05-15 00:53:13.668103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1b8a0 (107): Transport endpoint is not connected 00:16:26.620 [2024-05-15 00:53:13.669097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1b8a0 (9): Bad file descriptor 00:16:26.620 [2024-05-15 00:53:13.670090] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:26.620 [2024-05-15 00:53:13.670119] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:26.620 [2024-05-15 00:53:13.670139] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
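The identity string in the error above, NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1, shows how the initiator names itself during the TLS handshake: the host NQN and subsystem NQN are baked into the PSK identity, and the target resolves keys per (subsystem, host) pair registered through nvmf_subsystem_add_host. Only host1 was registered, so the lookup for host2 fails before any key material is compared. A hypothetical fix, deliberately not part of this negative test, would be to register host2 as well:

    # hypothetical: give host2 its own registration so this pairing could connect
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.acxocC8CeT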
00:16:26.620 request: 00:16:26.620 { 00:16:26.620 "name": "TLSTEST", 00:16:26.620 "trtype": "tcp", 00:16:26.620 "traddr": "10.0.0.2", 00:16:26.620 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:26.620 "adrfam": "ipv4", 00:16:26.620 "trsvcid": "4420", 00:16:26.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.620 "psk": "/tmp/tmp.acxocC8CeT", 00:16:26.620 "method": "bdev_nvme_attach_controller", 00:16:26.620 "req_id": 1 00:16:26.620 } 00:16:26.620 Got JSON-RPC error response 00:16:26.620 response: 00:16:26.620 { 00:16:26.620 "code": -32602, 00:16:26.620 "message": "Invalid parameters" 00:16:26.620 } 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4022018 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4022018 ']' 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4022018 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4022018 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4022018' 00:16:26.878 killing process with pid 4022018 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4022018 00:16:26.878 Received shutdown signal, test time was about 10.000000 seconds 00:16:26.878 00:16:26.878 Latency(us) 00:16:26.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.878 =================================================================================================================== 00:16:26.878 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:26.878 [2024-05-15 00:53:13.721034] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4022018 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.acxocC8CeT 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.acxocC8CeT 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.acxocC8CeT 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.acxocC8CeT' 00:16:26.878 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:27.136 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4022125 00:16:27.136 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:27.136 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:27.136 00:53:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4022125 /var/tmp/bdevperf.sock 00:16:27.136 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4022125 ']' 00:16:27.136 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:27.136 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:27.136 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:27.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:27.136 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:27.136 00:53:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.136 [2024-05-15 00:53:13.979997] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:16:27.136 [2024-05-15 00:53:13.980098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4022125 ] 00:16:27.136 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.136 [2024-05-15 00:53:14.040436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.136 [2024-05-15 00:53:14.157450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.394 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:27.394 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:27.394 00:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.acxocC8CeT 00:16:27.652 [2024-05-15 00:53:14.538188] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:27.652 [2024-05-15 00:53:14.538317] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:27.652 [2024-05-15 00:53:14.549073] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:27.652 [2024-05-15 00:53:14.549107] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:27.652 [2024-05-15 00:53:14.549151] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:27.652 [2024-05-15 00:53:14.549706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd218a0 (107): Transport endpoint is not connected 00:16:27.652 [2024-05-15 00:53:14.550703] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd218a0 (9): Bad file descriptor 00:16:27.652 [2024-05-15 00:53:14.551704] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:27.652 [2024-05-15 00:53:14.551725] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:27.652 [2024-05-15 00:53:14.551745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
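This is the converse probe: a known host against an unregistered subsystem (cnode2). The target reports the same could-not-find-PSK lookup failure, which confirms the key is scoped to the exact (subnqn, hostnqn) pair rather than to the host alone. When debugging a mismatch like this, listing what the target actually has registered is the quickest check; the output includes each subsystem with its allowed host entries:

    # inspect registered subsystems and their host entries on the target side
    ./scripts/rpc.py nvmf_get_subsystems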
00:16:27.652 request: 00:16:27.652 { 00:16:27.652 "name": "TLSTEST", 00:16:27.652 "trtype": "tcp", 00:16:27.652 "traddr": "10.0.0.2", 00:16:27.652 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:27.652 "adrfam": "ipv4", 00:16:27.652 "trsvcid": "4420", 00:16:27.652 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:27.652 "psk": "/tmp/tmp.acxocC8CeT", 00:16:27.652 "method": "bdev_nvme_attach_controller", 00:16:27.652 "req_id": 1 00:16:27.652 } 00:16:27.652 Got JSON-RPC error response 00:16:27.652 response: 00:16:27.652 { 00:16:27.652 "code": -32602, 00:16:27.652 "message": "Invalid parameters" 00:16:27.652 } 00:16:27.652 00:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4022125 00:16:27.652 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4022125 ']' 00:16:27.652 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4022125 00:16:27.652 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:27.652 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:27.652 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4022125 00:16:27.652 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:27.652 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:27.652 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4022125' 00:16:27.652 killing process with pid 4022125 00:16:27.652 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4022125 00:16:27.652 Received shutdown signal, test time was about 10.000000 seconds 00:16:27.652 00:16:27.652 Latency(us) 00:16:27.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.652 =================================================================================================================== 00:16:27.652 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:27.652 [2024-05-15 00:53:14.598434] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:27.652 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4022125 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4022229 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4022229 /var/tmp/bdevperf.sock 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4022229 ']' 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:27.910 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:27.911 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:27.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:27.911 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:27.911 00:53:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.911 [2024-05-15 00:53:14.859487] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:16:27.911 [2024-05-15 00:53:14.859584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4022229 ] 00:16:27.911 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.911 [2024-05-15 00:53:14.920384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.169 [2024-05-15 00:53:15.040198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.169 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:28.169 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:28.169 00:53:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:28.427 [2024-05-15 00:53:15.431518] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:28.427 [2024-05-15 00:53:15.432868] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x180c390 (9): Bad file descriptor 00:16:28.427 [2024-05-15 00:53:15.433863] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:28.427 [2024-05-15 00:53:15.433886] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:28.427 [2024-05-15 00:53:15.433906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
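Last of the negative cases: no --psk at all. The listener was created with -k, which asked for a secure (TLS) channel, so a plain NVMe/TCP connect never gets past setup and the initiator ends up with the same dead-socket symptoms as above; note that the request dump below carries no "psk" field. The call, as traced:

    # attach with no key against the TLS-only listener; expected to fail
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1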
00:16:28.427 request: 00:16:28.427 { 00:16:28.427 "name": "TLSTEST", 00:16:28.427 "trtype": "tcp", 00:16:28.427 "traddr": "10.0.0.2", 00:16:28.427 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:28.427 "adrfam": "ipv4", 00:16:28.427 "trsvcid": "4420", 00:16:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.427 "method": "bdev_nvme_attach_controller", 00:16:28.427 "req_id": 1 00:16:28.427 } 00:16:28.427 Got JSON-RPC error response 00:16:28.427 response: 00:16:28.427 { 00:16:28.427 "code": -32602, 00:16:28.427 "message": "Invalid parameters" 00:16:28.427 } 00:16:28.427 00:53:15 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4022229 00:16:28.427 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4022229 ']' 00:16:28.427 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4022229 00:16:28.427 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:28.427 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:28.427 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4022229 00:16:28.427 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:28.427 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:28.427 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4022229' 00:16:28.427 killing process with pid 4022229 00:16:28.427 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4022229 00:16:28.427 Received shutdown signal, test time was about 10.000000 seconds 00:16:28.427 00:16:28.427 Latency(us) 00:16:28.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.427 =================================================================================================================== 00:16:28.427 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:28.427 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4022229 00:16:28.686 00:53:15 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:28.686 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:28.686 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:28.686 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:28.686 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:28.686 00:53:15 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 4019443 00:16:28.686 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4019443 ']' 00:16:28.686 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4019443 00:16:28.686 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:28.686 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:28.686 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4019443 00:16:28.686 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:28.686 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:28.686 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4019443' 00:16:28.686 killing process with pid 4019443 00:16:28.686 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4019443 
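With the negative matrix done, the first target (pid 4019443) is torn down; its deprecation summary follows below, after which the script derives a longer key: format_interchange_psk is called with a 48-byte hex string and digest 2, where the earlier 32-byte keys used digest 1. The resulting strings follow the NVMe TLS PSK interchange format NVMeTLSkey-1:<hh>:<base64>:, with <hh> selecting the hash (01 for SHA-256 with 32-byte PSKs, 02 for SHA-384 with 48-byte ones) and the base64 payload carrying the configured key bytes plus a 4-byte CRC32. The payload length seen above (48 base64 characters for the 32-byte key) confirms the 4 extra bytes; the byte order of the CRC is an assumption in the sketch below, so verify it against format_key in nvmf/common.sh in your tree:

    # sketch of the interchange encoding; the key string is used as raw ASCII bytes,
    # which matches the MDAx... payload logged above, and the CRC32 byte order is assumed little-endian
    key=00112233445566778899aabbccddeeff
    python3 -c 'import sys,base64,struct,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:01:"+base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode()+":")' "$key"
    # if the assumptions hold, this prints the same NVMeTLSkey-1:01:MDAx...JEiQ: string logged earlier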
00:16:28.686 [2024-05-15 00:53:15.706123] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:28.686 [2024-05-15 00:53:15.706182] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:28.686 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4019443 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.BY84opoHNX 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.BY84opoHNX 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4022349 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4022349 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4022349 ']' 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:28.944 00:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:29.202 [2024-05-15 00:53:16.042454] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:16:29.202 [2024-05-15 00:53:16.042552] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.202 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.202 [2024-05-15 00:53:16.107479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.202 [2024-05-15 00:53:16.225790] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.202 [2024-05-15 00:53:16.225856] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.202 [2024-05-15 00:53:16.225872] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.202 [2024-05-15 00:53:16.225884] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.202 [2024-05-15 00:53:16.225896] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.202 [2024-05-15 00:53:16.225927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.460 00:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:29.460 00:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:29.460 00:53:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:29.460 00:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:29.460 00:53:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:29.460 00:53:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.460 00:53:16 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.BY84opoHNX 00:16:29.460 00:53:16 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BY84opoHNX 00:16:29.460 00:53:16 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:29.718 [2024-05-15 00:53:16.638442] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.718 00:53:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:29.976 00:53:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:30.234 [2024-05-15 00:53:17.223971] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:30.234 [2024-05-15 00:53:17.224084] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:30.234 [2024-05-15 00:53:17.224271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.234 00:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:30.492 malloc0 00:16:30.492 00:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
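With the new key written out, the target is rebuilt through the same setup_nvmf_tgt path as before. Condensed, the target-side sequence is the five RPCs traced above plus the add_host call that follows immediately below, which is what binds the key file to the host:

    # target-side TLS setup, as traced in this log (workspace paths shortened)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BY84opoHNX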
00:16:30.751 00:53:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BY84opoHNX 00:16:31.009 [2024-05-15 00:53:17.992588] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:31.010 00:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BY84opoHNX 00:16:31.010 00:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:31.010 00:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:31.010 00:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:31.010 00:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BY84opoHNX' 00:16:31.010 00:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:31.010 00:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4022562 00:16:31.010 00:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:31.010 00:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:31.010 00:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4022562 /var/tmp/bdevperf.sock 00:16:31.010 00:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4022562 ']' 00:16:31.010 00:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:31.010 00:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:31.010 00:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:31.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:31.010 00:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:31.010 00:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.010 [2024-05-15 00:53:18.055673] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:16:31.010 [2024-05-15 00:53:18.055763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4022562 ] 00:16:31.268 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.268 [2024-05-15 00:53:18.110401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.268 [2024-05-15 00:53:18.228705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.526 00:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:31.526 00:53:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:31.526 00:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BY84opoHNX 00:16:31.526 [2024-05-15 00:53:18.548816] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:31.526 [2024-05-15 00:53:18.548949] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:31.784 TLSTESTn1 00:16:31.784 00:53:18 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:31.784 Running I/O for 10 seconds... 00:16:41.756 00:16:41.756 Latency(us) 00:16:41.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.756 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:41.756 Verification LBA range: start 0x0 length 0x2000 00:16:41.756 TLSTESTn1 : 10.05 2532.84 9.89 0.00 0.00 50391.27 7184.69 81167.55 00:16:41.756 =================================================================================================================== 00:16:41.756 Total : 2532.84 9.89 0.00 0.00 50391.27 7184.69 81167.55 00:16:41.756 0 00:16:42.014 00:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:42.014 00:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4022562 00:16:42.014 00:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4022562 ']' 00:16:42.014 00:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4022562 00:16:42.014 00:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:42.014 00:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:42.014 00:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4022562 00:16:42.014 00:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:42.014 00:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:42.014 00:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4022562' 00:16:42.014 killing process with pid 4022562 00:16:42.014 00:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4022562 00:16:42.014 Received shutdown signal, test time was about 10.000000 seconds 00:16:42.014 00:16:42.014 Latency(us) 00:16:42.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:16:42.014 =================================================================================================================== 00:16:42.014 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:42.014 [2024-05-15 00:53:28.855039] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:42.014 00:53:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4022562 00:16:42.014 00:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.BY84opoHNX 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BY84opoHNX 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BY84opoHNX 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BY84opoHNX 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BY84opoHNX' 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4023568 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4023568 /var/tmp/bdevperf.sock 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4023568 ']' 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:42.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:42.273 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:42.273 [2024-05-15 00:53:29.124604] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:16:42.273 [2024-05-15 00:53:29.124699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4023568 ] 00:16:42.273 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.273 [2024-05-15 00:53:29.185391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.273 [2024-05-15 00:53:29.303493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.532 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:42.532 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:42.532 00:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BY84opoHNX 00:16:42.791 [2024-05-15 00:53:29.658044] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:42.791 [2024-05-15 00:53:29.658127] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:42.791 [2024-05-15 00:53:29.658143] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.BY84opoHNX 00:16:42.791 request: 00:16:42.792 { 00:16:42.792 "name": "TLSTEST", 00:16:42.792 "trtype": "tcp", 00:16:42.792 "traddr": "10.0.0.2", 00:16:42.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:42.792 "adrfam": "ipv4", 00:16:42.792 "trsvcid": "4420", 00:16:42.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.792 "psk": "/tmp/tmp.BY84opoHNX", 00:16:42.792 "method": "bdev_nvme_attach_controller", 00:16:42.792 "req_id": 1 00:16:42.792 } 00:16:42.792 Got JSON-RPC error response 00:16:42.792 response: 00:16:42.792 { 00:16:42.792 "code": -1, 00:16:42.792 "message": "Operation not permitted" 00:16:42.792 } 00:16:42.792 00:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4023568 00:16:42.792 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4023568 ']' 00:16:42.792 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4023568 00:16:42.792 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:42.792 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:42.792 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4023568 00:16:42.792 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:42.792 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:42.792 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4023568' 00:16:42.792 killing process with pid 4023568 00:16:42.792 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4023568 00:16:42.792 Received shutdown signal, test time was about 10.000000 seconds 00:16:42.792 00:16:42.792 Latency(us) 00:16:42.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.792 =================================================================================================================== 00:16:42.792 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:42.792 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 4023568 00:16:43.050 00:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:43.050 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:43.050 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:43.050 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:43.050 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:43.050 00:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 4022349 00:16:43.050 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4022349 ']' 00:16:43.050 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4022349 00:16:43.050 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:43.050 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:43.050 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4022349 00:16:43.050 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:43.050 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:43.050 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4022349' 00:16:43.050 killing process with pid 4022349 00:16:43.050 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4022349 00:16:43.050 [2024-05-15 00:53:29.941836] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:43.050 [2024-05-15 00:53:29.941884] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:43.050 00:53:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4022349 00:16:43.308 00:53:30 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:16:43.308 00:53:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:43.308 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:43.308 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:43.308 00:53:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4023685 00:16:43.308 00:53:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:43.308 00:53:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4023685 00:16:43.308 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4023685 ']' 00:16:43.308 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.308 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:43.308 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
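That completes the baseline positive case (target/tls.sh test 167): bdevperf acts as the TLS initiator, attaches to the target with the same PSK, and sustains roughly 2.5k verify IOPS for 10 seconds before being torn down. Stripped of the test harness, the flow traced above is roughly the following sketch (commands as run from the SPDK tree in this job; note the RPC socket here is bdevperf's, not the target's):

  # start bdevperf idle (-z) on its own RPC socket
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &

  # attach to the target over TCP with TLS, using the PSK file
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.BY84opoHNX

  # drive the timed I/O run that produced the latency table above
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests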
00:16:43.308 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:43.308 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:43.308 [2024-05-15 00:53:30.224977] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:16:43.308 [2024-05-15 00:53:30.225078] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.308 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.308 [2024-05-15 00:53:30.290767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.566 [2024-05-15 00:53:30.409136] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.566 [2024-05-15 00:53:30.409201] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.566 [2024-05-15 00:53:30.409217] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.566 [2024-05-15 00:53:30.409231] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.566 [2024-05-15 00:53:30.409242] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.566 [2024-05-15 00:53:30.409280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.566 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:43.566 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:43.566 00:53:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:43.566 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:43.566 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:43.566 00:53:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.566 00:53:30 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.BY84opoHNX 00:16:43.566 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:43.566 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.BY84opoHNX 00:16:43.566 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:16:43.566 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:43.566 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:16:43.566 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:43.566 00:53:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.BY84opoHNX 00:16:43.566 00:53:30 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BY84opoHNX 00:16:43.566 00:53:30 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:43.824 [2024-05-15 00:53:30.817689] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.824 00:53:30 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:44.089 00:53:31 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:44.348 [2024-05-15 00:53:31.347048] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:44.348 [2024-05-15 00:53:31.347148] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:44.348 [2024-05-15 00:53:31.347336] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.348 00:53:31 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:44.606 malloc0 00:16:44.606 00:53:31 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:44.865 00:53:31 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BY84opoHNX 00:16:45.124 [2024-05-15 00:53:32.091532] tcp.c:3572:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:45.124 [2024-05-15 00:53:32.091576] tcp.c:3658:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:16:45.124 [2024-05-15 00:53:32.091617] subsystem.c:1030:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:45.124 request: 00:16:45.124 { 00:16:45.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.124 "host": "nqn.2016-06.io.spdk:host1", 00:16:45.124 "psk": "/tmp/tmp.BY84opoHNX", 00:16:45.124 "method": "nvmf_subsystem_add_host", 00:16:45.124 "req_id": 1 00:16:45.124 } 00:16:45.124 Got JSON-RPC error response 00:16:45.124 response: 00:16:45.124 { 00:16:45.124 "code": -32603, 00:16:45.124 "message": "Internal error" 00:16:45.124 } 00:16:45.124 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:45.124 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:45.124 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:45.124 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:45.124 00:53:32 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 4023685 00:16:45.124 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4023685 ']' 00:16:45.124 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4023685 00:16:45.124 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:45.124 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:45.124 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4023685 00:16:45.124 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:45.124 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:45.124 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4023685' 00:16:45.124 killing process with pid 4023685 00:16:45.124 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4023685 00:16:45.124 [2024-05-15 00:53:32.134306] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:45.124 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4023685 00:16:45.384 00:53:32 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.BY84opoHNX 00:16:45.384 00:53:32 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:16:45.384 00:53:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:45.384 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:45.384 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.384 00:53:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4023917 00:16:45.384 00:53:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:45.384 00:53:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4023917 00:16:45.384 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4023917 ']' 00:16:45.384 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.384 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:45.384 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.384 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:45.384 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.384 [2024-05-15 00:53:32.409688] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:16:45.384 [2024-05-15 00:53:32.409774] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.384 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.642 [2024-05-15 00:53:32.470588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.642 [2024-05-15 00:53:32.588392] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.642 [2024-05-15 00:53:32.588456] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.642 [2024-05-15 00:53:32.588472] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.642 [2024-05-15 00:53:32.588485] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.642 [2024-05-15 00:53:32.588496] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
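The two failures just logged are deliberate negative tests (target/tls.sh tests 171 and 177): once the key file is made world-readable, the initiator side rejects it (bdev_nvme logs "Incorrect permissions for PSK file" and bdev_nvme_attach_controller returns code -1, "Operation not permitted"), and the target side does too (tcp_load_psk fails, so nvmf_subsystem_add_host returns code -32603, "Internal error"). In outline, using the same key file as above:

  chmod 0666 /tmp/tmp.BY84opoHNX   # world-readable: SPDK refuses to load the key

  # initiator-side attach now fails with "Operation not permitted"
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.BY84opoHNX

  # target-side host registration now fails with "Internal error"
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BY84opoHNX

  chmod 0600 /tmp/tmp.BY84opoHNX   # restore owner-only permissions for the later tests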
00:16:45.642 [2024-05-15 00:53:32.588527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.642 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:45.642 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:45.643 00:53:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:45.643 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:45.643 00:53:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.901 00:53:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.901 00:53:32 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.BY84opoHNX 00:16:45.901 00:53:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BY84opoHNX 00:16:45.901 00:53:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:46.172 [2024-05-15 00:53:32.992797] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.172 00:53:33 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:46.430 00:53:33 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:46.688 [2024-05-15 00:53:33.574323] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:46.688 [2024-05-15 00:53:33.574443] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:46.688 [2024-05-15 00:53:33.574628] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.688 00:53:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:46.946 malloc0 00:16:46.946 00:53:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:47.205 00:53:34 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BY84opoHNX 00:16:47.463 [2024-05-15 00:53:34.326864] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:47.463 00:53:34 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=4024134 00:16:47.463 00:53:34 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:47.463 00:53:34 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 4024134 /var/tmp/bdevperf.sock 00:16:47.463 00:53:34 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:47.463 00:53:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4024134 ']' 00:16:47.463 00:53:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:16:47.463 00:53:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:47.463 00:53:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:47.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:47.463 00:53:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:47.463 00:53:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.463 [2024-05-15 00:53:34.390014] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:16:47.464 [2024-05-15 00:53:34.390102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4024134 ] 00:16:47.464 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.464 [2024-05-15 00:53:34.443949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.722 [2024-05-15 00:53:34.561624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.722 00:53:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:47.722 00:53:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:47.722 00:53:34 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BY84opoHNX 00:16:47.980 [2024-05-15 00:53:34.886914] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:47.980 [2024-05-15 00:53:34.887073] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:47.980 TLSTESTn1 00:16:47.980 00:53:34 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:16:48.547 00:53:35 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:16:48.547 "subsystems": [ 00:16:48.547 { 00:16:48.547 "subsystem": "keyring", 00:16:48.547 "config": [] 00:16:48.547 }, 00:16:48.547 { 00:16:48.547 "subsystem": "iobuf", 00:16:48.547 "config": [ 00:16:48.547 { 00:16:48.547 "method": "iobuf_set_options", 00:16:48.547 "params": { 00:16:48.547 "small_pool_count": 8192, 00:16:48.547 "large_pool_count": 1024, 00:16:48.547 "small_bufsize": 8192, 00:16:48.547 "large_bufsize": 135168 00:16:48.547 } 00:16:48.547 } 00:16:48.547 ] 00:16:48.547 }, 00:16:48.547 { 00:16:48.547 "subsystem": "sock", 00:16:48.547 "config": [ 00:16:48.547 { 00:16:48.547 "method": "sock_impl_set_options", 00:16:48.547 "params": { 00:16:48.547 "impl_name": "posix", 00:16:48.547 "recv_buf_size": 2097152, 00:16:48.547 "send_buf_size": 2097152, 00:16:48.547 "enable_recv_pipe": true, 00:16:48.547 "enable_quickack": false, 00:16:48.547 "enable_placement_id": 0, 00:16:48.547 "enable_zerocopy_send_server": true, 00:16:48.547 "enable_zerocopy_send_client": false, 00:16:48.547 "zerocopy_threshold": 0, 00:16:48.547 "tls_version": 0, 00:16:48.547 "enable_ktls": false 00:16:48.547 } 00:16:48.547 }, 00:16:48.547 { 00:16:48.547 "method": "sock_impl_set_options", 00:16:48.547 "params": { 00:16:48.547 
"impl_name": "ssl", 00:16:48.547 "recv_buf_size": 4096, 00:16:48.547 "send_buf_size": 4096, 00:16:48.547 "enable_recv_pipe": true, 00:16:48.547 "enable_quickack": false, 00:16:48.547 "enable_placement_id": 0, 00:16:48.547 "enable_zerocopy_send_server": true, 00:16:48.547 "enable_zerocopy_send_client": false, 00:16:48.547 "zerocopy_threshold": 0, 00:16:48.547 "tls_version": 0, 00:16:48.547 "enable_ktls": false 00:16:48.547 } 00:16:48.547 } 00:16:48.547 ] 00:16:48.547 }, 00:16:48.547 { 00:16:48.547 "subsystem": "vmd", 00:16:48.547 "config": [] 00:16:48.547 }, 00:16:48.547 { 00:16:48.547 "subsystem": "accel", 00:16:48.547 "config": [ 00:16:48.547 { 00:16:48.547 "method": "accel_set_options", 00:16:48.547 "params": { 00:16:48.547 "small_cache_size": 128, 00:16:48.547 "large_cache_size": 16, 00:16:48.547 "task_count": 2048, 00:16:48.547 "sequence_count": 2048, 00:16:48.547 "buf_count": 2048 00:16:48.547 } 00:16:48.547 } 00:16:48.547 ] 00:16:48.548 }, 00:16:48.548 { 00:16:48.548 "subsystem": "bdev", 00:16:48.548 "config": [ 00:16:48.548 { 00:16:48.548 "method": "bdev_set_options", 00:16:48.548 "params": { 00:16:48.548 "bdev_io_pool_size": 65535, 00:16:48.548 "bdev_io_cache_size": 256, 00:16:48.548 "bdev_auto_examine": true, 00:16:48.548 "iobuf_small_cache_size": 128, 00:16:48.548 "iobuf_large_cache_size": 16 00:16:48.548 } 00:16:48.548 }, 00:16:48.548 { 00:16:48.548 "method": "bdev_raid_set_options", 00:16:48.548 "params": { 00:16:48.548 "process_window_size_kb": 1024 00:16:48.548 } 00:16:48.548 }, 00:16:48.548 { 00:16:48.548 "method": "bdev_iscsi_set_options", 00:16:48.548 "params": { 00:16:48.548 "timeout_sec": 30 00:16:48.548 } 00:16:48.548 }, 00:16:48.548 { 00:16:48.548 "method": "bdev_nvme_set_options", 00:16:48.548 "params": { 00:16:48.548 "action_on_timeout": "none", 00:16:48.548 "timeout_us": 0, 00:16:48.548 "timeout_admin_us": 0, 00:16:48.548 "keep_alive_timeout_ms": 10000, 00:16:48.548 "arbitration_burst": 0, 00:16:48.548 "low_priority_weight": 0, 00:16:48.548 "medium_priority_weight": 0, 00:16:48.548 "high_priority_weight": 0, 00:16:48.548 "nvme_adminq_poll_period_us": 10000, 00:16:48.548 "nvme_ioq_poll_period_us": 0, 00:16:48.548 "io_queue_requests": 0, 00:16:48.548 "delay_cmd_submit": true, 00:16:48.548 "transport_retry_count": 4, 00:16:48.548 "bdev_retry_count": 3, 00:16:48.548 "transport_ack_timeout": 0, 00:16:48.548 "ctrlr_loss_timeout_sec": 0, 00:16:48.548 "reconnect_delay_sec": 0, 00:16:48.548 "fast_io_fail_timeout_sec": 0, 00:16:48.548 "disable_auto_failback": false, 00:16:48.548 "generate_uuids": false, 00:16:48.548 "transport_tos": 0, 00:16:48.548 "nvme_error_stat": false, 00:16:48.548 "rdma_srq_size": 0, 00:16:48.548 "io_path_stat": false, 00:16:48.548 "allow_accel_sequence": false, 00:16:48.548 "rdma_max_cq_size": 0, 00:16:48.548 "rdma_cm_event_timeout_ms": 0, 00:16:48.548 "dhchap_digests": [ 00:16:48.548 "sha256", 00:16:48.548 "sha384", 00:16:48.548 "sha512" 00:16:48.548 ], 00:16:48.548 "dhchap_dhgroups": [ 00:16:48.548 "null", 00:16:48.548 "ffdhe2048", 00:16:48.548 "ffdhe3072", 00:16:48.548 "ffdhe4096", 00:16:48.548 "ffdhe6144", 00:16:48.548 "ffdhe8192" 00:16:48.548 ] 00:16:48.548 } 00:16:48.548 }, 00:16:48.548 { 00:16:48.548 "method": "bdev_nvme_set_hotplug", 00:16:48.548 "params": { 00:16:48.548 "period_us": 100000, 00:16:48.548 "enable": false 00:16:48.548 } 00:16:48.548 }, 00:16:48.548 { 00:16:48.548 "method": "bdev_malloc_create", 00:16:48.548 "params": { 00:16:48.548 "name": "malloc0", 00:16:48.548 "num_blocks": 8192, 00:16:48.548 "block_size": 4096, 00:16:48.548 
"physical_block_size": 4096, 00:16:48.548 "uuid": "5f7ece0d-c4ec-47d1-a1fd-2fafe644041c", 00:16:48.548 "optimal_io_boundary": 0 00:16:48.548 } 00:16:48.548 }, 00:16:48.548 { 00:16:48.548 "method": "bdev_wait_for_examine" 00:16:48.548 } 00:16:48.548 ] 00:16:48.548 }, 00:16:48.548 { 00:16:48.548 "subsystem": "nbd", 00:16:48.548 "config": [] 00:16:48.548 }, 00:16:48.548 { 00:16:48.548 "subsystem": "scheduler", 00:16:48.548 "config": [ 00:16:48.548 { 00:16:48.548 "method": "framework_set_scheduler", 00:16:48.548 "params": { 00:16:48.548 "name": "static" 00:16:48.548 } 00:16:48.548 } 00:16:48.548 ] 00:16:48.548 }, 00:16:48.548 { 00:16:48.548 "subsystem": "nvmf", 00:16:48.548 "config": [ 00:16:48.548 { 00:16:48.548 "method": "nvmf_set_config", 00:16:48.548 "params": { 00:16:48.548 "discovery_filter": "match_any", 00:16:48.548 "admin_cmd_passthru": { 00:16:48.548 "identify_ctrlr": false 00:16:48.548 } 00:16:48.548 } 00:16:48.548 }, 00:16:48.548 { 00:16:48.548 "method": "nvmf_set_max_subsystems", 00:16:48.548 "params": { 00:16:48.548 "max_subsystems": 1024 00:16:48.548 } 00:16:48.548 }, 00:16:48.548 { 00:16:48.548 "method": "nvmf_set_crdt", 00:16:48.548 "params": { 00:16:48.548 "crdt1": 0, 00:16:48.548 "crdt2": 0, 00:16:48.548 "crdt3": 0 00:16:48.548 } 00:16:48.548 }, 00:16:48.548 { 00:16:48.548 "method": "nvmf_create_transport", 00:16:48.548 "params": { 00:16:48.548 "trtype": "TCP", 00:16:48.548 "max_queue_depth": 128, 00:16:48.548 "max_io_qpairs_per_ctrlr": 127, 00:16:48.548 "in_capsule_data_size": 4096, 00:16:48.548 "max_io_size": 131072, 00:16:48.548 "io_unit_size": 131072, 00:16:48.548 "max_aq_depth": 128, 00:16:48.548 "num_shared_buffers": 511, 00:16:48.548 "buf_cache_size": 4294967295, 00:16:48.548 "dif_insert_or_strip": false, 00:16:48.548 "zcopy": false, 00:16:48.548 "c2h_success": false, 00:16:48.548 "sock_priority": 0, 00:16:48.548 "abort_timeout_sec": 1, 00:16:48.548 "ack_timeout": 0, 00:16:48.548 "data_wr_pool_size": 0 00:16:48.548 } 00:16:48.548 }, 00:16:48.548 { 00:16:48.548 "method": "nvmf_create_subsystem", 00:16:48.548 "params": { 00:16:48.548 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.548 "allow_any_host": false, 00:16:48.548 "serial_number": "SPDK00000000000001", 00:16:48.548 "model_number": "SPDK bdev Controller", 00:16:48.548 "max_namespaces": 10, 00:16:48.548 "min_cntlid": 1, 00:16:48.548 "max_cntlid": 65519, 00:16:48.548 "ana_reporting": false 00:16:48.548 } 00:16:48.548 }, 00:16:48.548 { 00:16:48.548 "method": "nvmf_subsystem_add_host", 00:16:48.548 "params": { 00:16:48.548 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.548 "host": "nqn.2016-06.io.spdk:host1", 00:16:48.548 "psk": "/tmp/tmp.BY84opoHNX" 00:16:48.548 } 00:16:48.548 }, 00:16:48.548 { 00:16:48.548 "method": "nvmf_subsystem_add_ns", 00:16:48.548 "params": { 00:16:48.548 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.548 "namespace": { 00:16:48.548 "nsid": 1, 00:16:48.548 "bdev_name": "malloc0", 00:16:48.548 "nguid": "5F7ECE0DC4EC47D1A1FD2FAFE644041C", 00:16:48.548 "uuid": "5f7ece0d-c4ec-47d1-a1fd-2fafe644041c", 00:16:48.548 "no_auto_visible": false 00:16:48.548 } 00:16:48.548 } 00:16:48.548 }, 00:16:48.548 { 00:16:48.548 "method": "nvmf_subsystem_add_listener", 00:16:48.548 "params": { 00:16:48.548 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.548 "listen_address": { 00:16:48.548 "trtype": "TCP", 00:16:48.548 "adrfam": "IPv4", 00:16:48.548 "traddr": "10.0.0.2", 00:16:48.548 "trsvcid": "4420" 00:16:48.548 }, 00:16:48.548 "secure_channel": true 00:16:48.548 } 00:16:48.548 } 00:16:48.548 ] 00:16:48.548 } 
00:16:48.548 ] 00:16:48.548 }' 00:16:48.548 00:53:35 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:48.808 00:53:35 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:16:48.808 "subsystems": [ 00:16:48.808 { 00:16:48.808 "subsystem": "keyring", 00:16:48.808 "config": [] 00:16:48.808 }, 00:16:48.808 { 00:16:48.808 "subsystem": "iobuf", 00:16:48.808 "config": [ 00:16:48.808 { 00:16:48.808 "method": "iobuf_set_options", 00:16:48.808 "params": { 00:16:48.808 "small_pool_count": 8192, 00:16:48.808 "large_pool_count": 1024, 00:16:48.808 "small_bufsize": 8192, 00:16:48.808 "large_bufsize": 135168 00:16:48.808 } 00:16:48.808 } 00:16:48.808 ] 00:16:48.808 }, 00:16:48.808 { 00:16:48.808 "subsystem": "sock", 00:16:48.808 "config": [ 00:16:48.808 { 00:16:48.808 "method": "sock_impl_set_options", 00:16:48.808 "params": { 00:16:48.808 "impl_name": "posix", 00:16:48.808 "recv_buf_size": 2097152, 00:16:48.808 "send_buf_size": 2097152, 00:16:48.808 "enable_recv_pipe": true, 00:16:48.808 "enable_quickack": false, 00:16:48.808 "enable_placement_id": 0, 00:16:48.808 "enable_zerocopy_send_server": true, 00:16:48.808 "enable_zerocopy_send_client": false, 00:16:48.808 "zerocopy_threshold": 0, 00:16:48.808 "tls_version": 0, 00:16:48.808 "enable_ktls": false 00:16:48.808 } 00:16:48.808 }, 00:16:48.808 { 00:16:48.808 "method": "sock_impl_set_options", 00:16:48.808 "params": { 00:16:48.808 "impl_name": "ssl", 00:16:48.808 "recv_buf_size": 4096, 00:16:48.808 "send_buf_size": 4096, 00:16:48.808 "enable_recv_pipe": true, 00:16:48.808 "enable_quickack": false, 00:16:48.808 "enable_placement_id": 0, 00:16:48.808 "enable_zerocopy_send_server": true, 00:16:48.808 "enable_zerocopy_send_client": false, 00:16:48.808 "zerocopy_threshold": 0, 00:16:48.808 "tls_version": 0, 00:16:48.808 "enable_ktls": false 00:16:48.808 } 00:16:48.808 } 00:16:48.808 ] 00:16:48.808 }, 00:16:48.808 { 00:16:48.808 "subsystem": "vmd", 00:16:48.808 "config": [] 00:16:48.808 }, 00:16:48.808 { 00:16:48.808 "subsystem": "accel", 00:16:48.808 "config": [ 00:16:48.808 { 00:16:48.808 "method": "accel_set_options", 00:16:48.808 "params": { 00:16:48.808 "small_cache_size": 128, 00:16:48.808 "large_cache_size": 16, 00:16:48.808 "task_count": 2048, 00:16:48.808 "sequence_count": 2048, 00:16:48.808 "buf_count": 2048 00:16:48.808 } 00:16:48.808 } 00:16:48.808 ] 00:16:48.808 }, 00:16:48.808 { 00:16:48.808 "subsystem": "bdev", 00:16:48.808 "config": [ 00:16:48.808 { 00:16:48.808 "method": "bdev_set_options", 00:16:48.808 "params": { 00:16:48.808 "bdev_io_pool_size": 65535, 00:16:48.808 "bdev_io_cache_size": 256, 00:16:48.808 "bdev_auto_examine": true, 00:16:48.808 "iobuf_small_cache_size": 128, 00:16:48.808 "iobuf_large_cache_size": 16 00:16:48.808 } 00:16:48.808 }, 00:16:48.808 { 00:16:48.808 "method": "bdev_raid_set_options", 00:16:48.808 "params": { 00:16:48.808 "process_window_size_kb": 1024 00:16:48.808 } 00:16:48.808 }, 00:16:48.808 { 00:16:48.808 "method": "bdev_iscsi_set_options", 00:16:48.808 "params": { 00:16:48.808 "timeout_sec": 30 00:16:48.808 } 00:16:48.808 }, 00:16:48.808 { 00:16:48.808 "method": "bdev_nvme_set_options", 00:16:48.808 "params": { 00:16:48.808 "action_on_timeout": "none", 00:16:48.808 "timeout_us": 0, 00:16:48.808 "timeout_admin_us": 0, 00:16:48.808 "keep_alive_timeout_ms": 10000, 00:16:48.808 "arbitration_burst": 0, 00:16:48.808 "low_priority_weight": 0, 00:16:48.808 "medium_priority_weight": 0, 00:16:48.808 
"high_priority_weight": 0, 00:16:48.808 "nvme_adminq_poll_period_us": 10000, 00:16:48.808 "nvme_ioq_poll_period_us": 0, 00:16:48.808 "io_queue_requests": 512, 00:16:48.808 "delay_cmd_submit": true, 00:16:48.808 "transport_retry_count": 4, 00:16:48.808 "bdev_retry_count": 3, 00:16:48.808 "transport_ack_timeout": 0, 00:16:48.808 "ctrlr_loss_timeout_sec": 0, 00:16:48.808 "reconnect_delay_sec": 0, 00:16:48.808 "fast_io_fail_timeout_sec": 0, 00:16:48.808 "disable_auto_failback": false, 00:16:48.808 "generate_uuids": false, 00:16:48.808 "transport_tos": 0, 00:16:48.808 "nvme_error_stat": false, 00:16:48.808 "rdma_srq_size": 0, 00:16:48.808 "io_path_stat": false, 00:16:48.808 "allow_accel_sequence": false, 00:16:48.808 "rdma_max_cq_size": 0, 00:16:48.808 "rdma_cm_event_timeout_ms": 0, 00:16:48.808 "dhchap_digests": [ 00:16:48.808 "sha256", 00:16:48.808 "sha384", 00:16:48.808 "sha512" 00:16:48.808 ], 00:16:48.808 "dhchap_dhgroups": [ 00:16:48.808 "null", 00:16:48.808 "ffdhe2048", 00:16:48.808 "ffdhe3072", 00:16:48.809 "ffdhe4096", 00:16:48.809 "ffdhe6144", 00:16:48.809 "ffdhe8192" 00:16:48.809 ] 00:16:48.809 } 00:16:48.809 }, 00:16:48.809 { 00:16:48.809 "method": "bdev_nvme_attach_controller", 00:16:48.809 "params": { 00:16:48.809 "name": "TLSTEST", 00:16:48.809 "trtype": "TCP", 00:16:48.809 "adrfam": "IPv4", 00:16:48.809 "traddr": "10.0.0.2", 00:16:48.809 "trsvcid": "4420", 00:16:48.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.809 "prchk_reftag": false, 00:16:48.809 "prchk_guard": false, 00:16:48.809 "ctrlr_loss_timeout_sec": 0, 00:16:48.809 "reconnect_delay_sec": 0, 00:16:48.809 "fast_io_fail_timeout_sec": 0, 00:16:48.809 "psk": "/tmp/tmp.BY84opoHNX", 00:16:48.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:48.809 "hdgst": false, 00:16:48.809 "ddgst": false 00:16:48.809 } 00:16:48.809 }, 00:16:48.809 { 00:16:48.809 "method": "bdev_nvme_set_hotplug", 00:16:48.809 "params": { 00:16:48.809 "period_us": 100000, 00:16:48.809 "enable": false 00:16:48.809 } 00:16:48.809 }, 00:16:48.809 { 00:16:48.809 "method": "bdev_wait_for_examine" 00:16:48.809 } 00:16:48.809 ] 00:16:48.809 }, 00:16:48.809 { 00:16:48.809 "subsystem": "nbd", 00:16:48.809 "config": [] 00:16:48.809 } 00:16:48.809 ] 00:16:48.809 }' 00:16:48.809 00:53:35 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 4024134 00:16:48.809 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4024134 ']' 00:16:48.809 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4024134 00:16:48.809 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:48.809 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:48.809 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4024134 00:16:48.809 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:48.809 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:48.809 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4024134' 00:16:48.809 killing process with pid 4024134 00:16:48.809 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4024134 00:16:48.809 Received shutdown signal, test time was about 10.000000 seconds 00:16:48.809 00:16:48.809 Latency(us) 00:16:48.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.809 
=================================================================================================================== 00:16:48.809 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:48.809 [2024-05-15 00:53:35.677090] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:48.809 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4024134 00:16:49.068 00:53:35 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 4023917 00:16:49.068 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4023917 ']' 00:16:49.068 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4023917 00:16:49.068 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:49.068 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:49.068 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4023917 00:16:49.068 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:49.068 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:49.068 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4023917' 00:16:49.068 killing process with pid 4023917 00:16:49.068 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4023917 00:16:49.068 [2024-05-15 00:53:35.921916] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:49.068 [2024-05-15 00:53:35.921992] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:49.068 00:53:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4023917 00:16:49.328 00:53:36 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:49.328 00:53:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:49.328 00:53:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:49.328 00:53:36 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:16:49.328 "subsystems": [ 00:16:49.328 { 00:16:49.328 "subsystem": "keyring", 00:16:49.328 "config": [] 00:16:49.328 }, 00:16:49.328 { 00:16:49.328 "subsystem": "iobuf", 00:16:49.328 "config": [ 00:16:49.328 { 00:16:49.328 "method": "iobuf_set_options", 00:16:49.328 "params": { 00:16:49.328 "small_pool_count": 8192, 00:16:49.328 "large_pool_count": 1024, 00:16:49.328 "small_bufsize": 8192, 00:16:49.328 "large_bufsize": 135168 00:16:49.328 } 00:16:49.328 } 00:16:49.328 ] 00:16:49.328 }, 00:16:49.328 { 00:16:49.328 "subsystem": "sock", 00:16:49.328 "config": [ 00:16:49.328 { 00:16:49.328 "method": "sock_impl_set_options", 00:16:49.328 "params": { 00:16:49.328 "impl_name": "posix", 00:16:49.328 "recv_buf_size": 2097152, 00:16:49.328 "send_buf_size": 2097152, 00:16:49.328 "enable_recv_pipe": true, 00:16:49.328 "enable_quickack": false, 00:16:49.328 "enable_placement_id": 0, 00:16:49.328 "enable_zerocopy_send_server": true, 00:16:49.328 "enable_zerocopy_send_client": false, 00:16:49.328 "zerocopy_threshold": 0, 00:16:49.328 "tls_version": 0, 00:16:49.328 "enable_ktls": false 00:16:49.328 } 00:16:49.328 }, 00:16:49.328 { 00:16:49.328 "method": "sock_impl_set_options", 00:16:49.328 
"params": { 00:16:49.328 "impl_name": "ssl", 00:16:49.328 "recv_buf_size": 4096, 00:16:49.328 "send_buf_size": 4096, 00:16:49.328 "enable_recv_pipe": true, 00:16:49.328 "enable_quickack": false, 00:16:49.328 "enable_placement_id": 0, 00:16:49.328 "enable_zerocopy_send_server": true, 00:16:49.328 "enable_zerocopy_send_client": false, 00:16:49.328 "zerocopy_threshold": 0, 00:16:49.328 "tls_version": 0, 00:16:49.328 "enable_ktls": false 00:16:49.328 } 00:16:49.328 } 00:16:49.328 ] 00:16:49.328 }, 00:16:49.328 { 00:16:49.328 "subsystem": "vmd", 00:16:49.328 "config": [] 00:16:49.328 }, 00:16:49.328 { 00:16:49.328 "subsystem": "accel", 00:16:49.328 "config": [ 00:16:49.328 { 00:16:49.328 "method": "accel_set_options", 00:16:49.328 "params": { 00:16:49.328 "small_cache_size": 128, 00:16:49.328 "large_cache_size": 16, 00:16:49.328 "task_count": 2048, 00:16:49.328 "sequence_count": 2048, 00:16:49.328 "buf_count": 2048 00:16:49.328 } 00:16:49.328 } 00:16:49.328 ] 00:16:49.328 }, 00:16:49.328 { 00:16:49.328 "subsystem": "bdev", 00:16:49.328 "config": [ 00:16:49.328 { 00:16:49.328 "method": "bdev_set_options", 00:16:49.328 "params": { 00:16:49.328 "bdev_io_pool_size": 65535, 00:16:49.328 "bdev_io_cache_size": 256, 00:16:49.328 "bdev_auto_examine": true, 00:16:49.328 "iobuf_small_cache_size": 128, 00:16:49.328 "iobuf_large_cache_size": 16 00:16:49.328 } 00:16:49.328 }, 00:16:49.328 { 00:16:49.328 "method": "bdev_raid_set_options", 00:16:49.328 "params": { 00:16:49.328 "process_window_size_kb": 1024 00:16:49.328 } 00:16:49.328 }, 00:16:49.328 { 00:16:49.328 "method": "bdev_iscsi_set_options", 00:16:49.328 "params": { 00:16:49.328 "timeout_sec": 30 00:16:49.328 } 00:16:49.328 }, 00:16:49.328 { 00:16:49.328 "method": "bdev_nvme_set_options", 00:16:49.328 "params": { 00:16:49.328 "action_on_timeout": "none", 00:16:49.328 "timeout_us": 0, 00:16:49.328 "timeout_admin_us": 0, 00:16:49.328 "keep_alive_timeout_ms": 10000, 00:16:49.328 "arbitration_burst": 0, 00:16:49.328 "low_priority_weight": 0, 00:16:49.328 "medium_priority_weight": 0, 00:16:49.328 "high_priority_weight": 0, 00:16:49.328 "nvme_adminq_poll_period_us": 10000, 00:16:49.328 "nvme_ioq_poll_period_us": 0, 00:16:49.328 "io_queue_requests": 0, 00:16:49.328 "delay_cmd_submit": true, 00:16:49.328 "transport_retry_count": 4, 00:16:49.328 "bdev_retry_count": 3, 00:16:49.328 "transport_ack_timeout": 0, 00:16:49.328 "ctrlr_loss_timeout_sec": 0, 00:16:49.328 "reconnect_delay_sec": 0, 00:16:49.328 "fast_io_fail_timeout_sec": 0, 00:16:49.328 "disable_auto_failback": false, 00:16:49.328 "generate_uuids": false, 00:16:49.328 "transport_tos": 0, 00:16:49.328 "nvme_error_stat": false, 00:16:49.328 "rdma_srq_size": 0, 00:16:49.328 "io_path_stat": false, 00:16:49.328 "allow_accel_sequence": false, 00:16:49.328 "rdma_max_cq_size": 0, 00:16:49.328 "rdma_cm_event_timeout_ms": 0, 00:16:49.328 "dhchap_digests": [ 00:16:49.328 "sha256", 00:16:49.328 "sha384", 00:16:49.328 "sha512" 00:16:49.328 ], 00:16:49.328 "dhchap_dhgroups": [ 00:16:49.328 "null", 00:16:49.328 "ffdhe2048", 00:16:49.328 "ffdhe3072", 00:16:49.328 "ffdhe4096", 00:16:49.328 "ffdhe6144", 00:16:49.328 "ffdhe8192" 00:16:49.328 ] 00:16:49.328 } 00:16:49.328 }, 00:16:49.328 { 00:16:49.328 "method": "bdev_nvme_set_hotplug", 00:16:49.328 "params": { 00:16:49.328 "period_us": 100000, 00:16:49.328 "enable": false 00:16:49.328 } 00:16:49.328 }, 00:16:49.328 { 00:16:49.328 "method": "bdev_malloc_create", 00:16:49.328 "params": { 00:16:49.328 "name": "malloc0", 00:16:49.328 "num_blocks": 8192, 00:16:49.328 
"block_size": 4096, 00:16:49.328 "physical_block_size": 4096, 00:16:49.328 "uuid": "5f7ece0d-c4ec-47d1-a1fd-2fafe644041c", 00:16:49.328 "optimal_io_boundary": 0 00:16:49.328 } 00:16:49.328 }, 00:16:49.328 { 00:16:49.328 "method": "bdev_wait_for_examine" 00:16:49.328 } 00:16:49.328 ] 00:16:49.328 }, 00:16:49.328 { 00:16:49.328 "subsystem": "nbd", 00:16:49.328 "config": [] 00:16:49.328 }, 00:16:49.328 { 00:16:49.328 "subsystem": "scheduler", 00:16:49.329 "config": [ 00:16:49.329 { 00:16:49.329 "method": "framework_set_scheduler", 00:16:49.329 "params": { 00:16:49.329 "name": "static" 00:16:49.329 } 00:16:49.329 } 00:16:49.329 ] 00:16:49.329 }, 00:16:49.329 { 00:16:49.329 "subsystem": "nvmf", 00:16:49.329 "config": [ 00:16:49.329 { 00:16:49.329 "method": "nvmf_set_config", 00:16:49.329 "params": { 00:16:49.329 "discovery_filter": "match_any", 00:16:49.329 "admin_cmd_passthru": { 00:16:49.329 "identify_ctrlr": false 00:16:49.329 } 00:16:49.329 } 00:16:49.329 }, 00:16:49.329 { 00:16:49.329 "method": "nvmf_set_max_subsystems", 00:16:49.329 "params": { 00:16:49.329 "max_subsystems": 1024 00:16:49.329 } 00:16:49.329 }, 00:16:49.329 { 00:16:49.329 "method": "nvmf_set_crdt", 00:16:49.329 "params": { 00:16:49.329 "crdt1": 0, 00:16:49.329 "crdt2": 0, 00:16:49.329 "crdt3": 0 00:16:49.329 } 00:16:49.329 }, 00:16:49.329 { 00:16:49.329 "method": "nvmf_create_transport", 00:16:49.329 "params": { 00:16:49.329 "trtype": "TCP", 00:16:49.329 "max_queue_depth": 128, 00:16:49.329 "max_io_qpairs_per_ctrlr": 127, 00:16:49.329 "in_capsule_data_size": 4096, 00:16:49.329 "max_io_size": 131072, 00:16:49.329 "io_unit_size": 131072, 00:16:49.329 "max_aq_depth": 128, 00:16:49.329 "num_shared_buffers": 511, 00:16:49.329 "buf_cache_size": 4294967295, 00:16:49.329 "dif_insert_or_strip": false, 00:16:49.329 "zcopy": false, 00:16:49.329 "c2h_success": false, 00:16:49.329 "sock_priority": 0, 00:16:49.329 "abort_timeout_sec": 1, 00:16:49.329 "ack_timeout": 0, 00:16:49.329 "data_wr_pool_size": 0 00:16:49.329 } 00:16:49.329 }, 00:16:49.329 { 00:16:49.329 "method": "nvmf_create_subsystem", 00:16:49.329 "params": { 00:16:49.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.329 "allow_any_host": false, 00:16:49.329 "serial_number": "SPDK00000000000001", 00:16:49.329 "model_number": "SPDK bdev Controller", 00:16:49.329 "max_namespaces": 10, 00:16:49.329 "min_cntlid": 1, 00:16:49.329 "max_cntlid": 65519, 00:16:49.329 "ana_reporting": false 00:16:49.329 } 00:16:49.329 }, 00:16:49.329 { 00:16:49.329 "method": "nvmf_subsystem_add_host", 00:16:49.329 "params": { 00:16:49.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.329 "host": "nqn.2016-06.io.spdk:host1", 00:16:49.329 "psk": "/tmp/tmp.BY84opoHNX" 00:16:49.329 } 00:16:49.329 }, 00:16:49.329 { 00:16:49.329 "method": "nvmf_subsystem_add_ns", 00:16:49.329 "params": { 00:16:49.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.329 "namespace": { 00:16:49.329 "nsid": 1, 00:16:49.329 "bdev_name": "malloc0", 00:16:49.329 "nguid": "5F7ECE0DC4EC47D1A1FD2FAFE644041C", 00:16:49.329 "uuid": "5f7ece0d-c4ec-47d1-a1fd-2fafe644041c", 00:16:49.329 "no_auto_visible": false 00:16:49.329 } 00:16:49.329 } 00:16:49.329 }, 00:16:49.329 { 00:16:49.329 "method": "nvmf_subsystem_add_listener", 00:16:49.329 "params": { 00:16:49.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.329 "listen_address": { 00:16:49.329 "trtype": "TCP", 00:16:49.329 "adrfam": "IPv4", 00:16:49.329 "traddr": "10.0.0.2", 00:16:49.329 "trsvcid": "4420" 00:16:49.329 }, 00:16:49.329 "secure_channel": true 00:16:49.329 } 00:16:49.329 } 
00:16:49.329 ] 00:16:49.329 } 00:16:49.329 ] 00:16:49.329 }' 00:16:49.329 00:53:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.329 00:53:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4024267 00:16:49.329 00:53:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:49.329 00:53:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4024267 00:16:49.329 00:53:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4024267 ']' 00:16:49.329 00:53:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.329 00:53:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:49.329 00:53:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.329 00:53:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:49.329 00:53:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.329 [2024-05-15 00:53:36.203730] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:16:49.329 [2024-05-15 00:53:36.203822] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.329 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.329 [2024-05-15 00:53:36.268150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.329 [2024-05-15 00:53:36.383697] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.329 [2024-05-15 00:53:36.383754] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.329 [2024-05-15 00:53:36.383770] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.329 [2024-05-15 00:53:36.383784] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.329 [2024-05-15 00:53:36.383796] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
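The JSON blob echoed into the new target above is the save_config dump captured from the previous target instance, so this part of the run (target/tls.sh test 203) checks that a TLS-enabled configuration, including the subsystem's PSK path, survives a save/restore round trip. Outside the harness the same round trip would look roughly like this; the file name is illustrative only:

  # dump the live target's state, then boot a fresh target from that dump
  scripts/rpc.py save_config > /tmp/tgt_config.json
  build/bin/nvmf_tgt -m 0x2 -c /tmp/tgt_config.json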
00:16:49.329 [2024-05-15 00:53:36.383883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.588 [2024-05-15 00:53:36.599382] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.588 [2024-05-15 00:53:36.615335] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:49.588 [2024-05-15 00:53:36.631345] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:49.588 [2024-05-15 00:53:36.631421] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:49.588 [2024-05-15 00:53:36.640125] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:50.524 00:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:50.524 00:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:50.524 00:53:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:50.524 00:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:50.524 00:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.524 00:53:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.524 00:53:37 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=4024385 00:16:50.524 00:53:37 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 4024385 /var/tmp/bdevperf.sock 00:16:50.524 00:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4024385 ']' 00:16:50.524 00:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:50.524 00:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:50.524 00:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:50.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
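The bdevperf instance now starting is configured the same way: the bdevperfconf JSON saved earlier is fed back through -c, presumably via process substitution (which is what the /dev/fd/63 path in the command below is). Because that config already contains the bdev_nvme_attach_controller entry with the PSK, the initiator reconnects over TLS at startup without any further RPCs; a sketch, assuming the saved config is still held in the shell variable from the earlier save_config step:

  # replay the saved initiator config; <(...) appears to bdevperf as /dev/fd/63
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &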
00:16:50.524 00:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:50.524 00:53:37 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:50.524 00:53:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.524 00:53:37 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:16:50.524 "subsystems": [ 00:16:50.524 { 00:16:50.524 "subsystem": "keyring", 00:16:50.524 "config": [] 00:16:50.524 }, 00:16:50.524 { 00:16:50.524 "subsystem": "iobuf", 00:16:50.524 "config": [ 00:16:50.524 { 00:16:50.524 "method": "iobuf_set_options", 00:16:50.524 "params": { 00:16:50.524 "small_pool_count": 8192, 00:16:50.524 "large_pool_count": 1024, 00:16:50.524 "small_bufsize": 8192, 00:16:50.524 "large_bufsize": 135168 00:16:50.524 } 00:16:50.524 } 00:16:50.524 ] 00:16:50.524 }, 00:16:50.524 { 00:16:50.524 "subsystem": "sock", 00:16:50.524 "config": [ 00:16:50.524 { 00:16:50.524 "method": "sock_impl_set_options", 00:16:50.524 "params": { 00:16:50.524 "impl_name": "posix", 00:16:50.524 "recv_buf_size": 2097152, 00:16:50.524 "send_buf_size": 2097152, 00:16:50.524 "enable_recv_pipe": true, 00:16:50.524 "enable_quickack": false, 00:16:50.524 "enable_placement_id": 0, 00:16:50.524 "enable_zerocopy_send_server": true, 00:16:50.524 "enable_zerocopy_send_client": false, 00:16:50.524 "zerocopy_threshold": 0, 00:16:50.524 "tls_version": 0, 00:16:50.524 "enable_ktls": false 00:16:50.524 } 00:16:50.524 }, 00:16:50.524 { 00:16:50.524 "method": "sock_impl_set_options", 00:16:50.524 "params": { 00:16:50.524 "impl_name": "ssl", 00:16:50.524 "recv_buf_size": 4096, 00:16:50.524 "send_buf_size": 4096, 00:16:50.524 "enable_recv_pipe": true, 00:16:50.524 "enable_quickack": false, 00:16:50.524 "enable_placement_id": 0, 00:16:50.524 "enable_zerocopy_send_server": true, 00:16:50.524 "enable_zerocopy_send_client": false, 00:16:50.524 "zerocopy_threshold": 0, 00:16:50.524 "tls_version": 0, 00:16:50.524 "enable_ktls": false 00:16:50.524 } 00:16:50.524 } 00:16:50.524 ] 00:16:50.524 }, 00:16:50.524 { 00:16:50.524 "subsystem": "vmd", 00:16:50.524 "config": [] 00:16:50.524 }, 00:16:50.524 { 00:16:50.524 "subsystem": "accel", 00:16:50.524 "config": [ 00:16:50.524 { 00:16:50.524 "method": "accel_set_options", 00:16:50.524 "params": { 00:16:50.524 "small_cache_size": 128, 00:16:50.524 "large_cache_size": 16, 00:16:50.524 "task_count": 2048, 00:16:50.524 "sequence_count": 2048, 00:16:50.524 "buf_count": 2048 00:16:50.524 } 00:16:50.524 } 00:16:50.524 ] 00:16:50.524 }, 00:16:50.524 { 00:16:50.524 "subsystem": "bdev", 00:16:50.524 "config": [ 00:16:50.524 { 00:16:50.524 "method": "bdev_set_options", 00:16:50.524 "params": { 00:16:50.524 "bdev_io_pool_size": 65535, 00:16:50.524 "bdev_io_cache_size": 256, 00:16:50.524 "bdev_auto_examine": true, 00:16:50.524 "iobuf_small_cache_size": 128, 00:16:50.524 "iobuf_large_cache_size": 16 00:16:50.524 } 00:16:50.524 }, 00:16:50.524 { 00:16:50.524 "method": "bdev_raid_set_options", 00:16:50.524 "params": { 00:16:50.524 "process_window_size_kb": 1024 00:16:50.524 } 00:16:50.524 }, 00:16:50.524 { 00:16:50.524 "method": "bdev_iscsi_set_options", 00:16:50.524 "params": { 00:16:50.524 "timeout_sec": 30 00:16:50.524 } 00:16:50.524 }, 00:16:50.524 { 00:16:50.524 "method": "bdev_nvme_set_options", 00:16:50.524 "params": { 00:16:50.524 "action_on_timeout": "none", 00:16:50.524 "timeout_us": 0, 00:16:50.524 
"timeout_admin_us": 0, 00:16:50.524 "keep_alive_timeout_ms": 10000, 00:16:50.524 "arbitration_burst": 0, 00:16:50.524 "low_priority_weight": 0, 00:16:50.524 "medium_priority_weight": 0, 00:16:50.524 "high_priority_weight": 0, 00:16:50.524 "nvme_adminq_poll_period_us": 10000, 00:16:50.524 "nvme_ioq_poll_period_us": 0, 00:16:50.524 "io_queue_requests": 512, 00:16:50.524 "delay_cmd_submit": true, 00:16:50.524 "transport_retry_count": 4, 00:16:50.524 "bdev_retry_count": 3, 00:16:50.524 "transport_ack_timeout": 0, 00:16:50.524 "ctrlr_loss_timeout_sec": 0, 00:16:50.524 "reconnect_delay_sec": 0, 00:16:50.524 "fast_io_fail_timeout_sec": 0, 00:16:50.524 "disable_auto_failback": false, 00:16:50.524 "generate_uuids": false, 00:16:50.524 "transport_tos": 0, 00:16:50.524 "nvme_error_stat": false, 00:16:50.524 "rdma_srq_size": 0, 00:16:50.524 "io_path_stat": false, 00:16:50.524 "allow_accel_sequence": false, 00:16:50.524 "rdma_max_cq_size": 0, 00:16:50.524 "rdma_cm_event_timeout_ms": 0, 00:16:50.524 "dhchap_digests": [ 00:16:50.524 "sha256", 00:16:50.524 "sha384", 00:16:50.524 "sha512" 00:16:50.524 ], 00:16:50.524 "dhchap_dhgroups": [ 00:16:50.524 "null", 00:16:50.524 "ffdhe2048", 00:16:50.524 "ffdhe3072", 00:16:50.524 "ffdhe4096", 00:16:50.524 "ffdhe6144", 00:16:50.524 "ffdhe8192" 00:16:50.524 ] 00:16:50.524 } 00:16:50.524 }, 00:16:50.524 { 00:16:50.524 "method": "bdev_nvme_attach_controller", 00:16:50.524 "params": { 00:16:50.524 "name": "TLSTEST", 00:16:50.524 "trtype": "TCP", 00:16:50.524 "adrfam": "IPv4", 00:16:50.524 "traddr": "10.0.0.2", 00:16:50.524 "trsvcid": "4420", 00:16:50.524 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.524 "prchk_reftag": false, 00:16:50.524 "prchk_guard": false, 00:16:50.524 "ctrlr_loss_timeout_sec": 0, 00:16:50.525 "reconnect_delay_sec": 0, 00:16:50.525 "fast_io_fail_timeout_sec": 0, 00:16:50.525 "psk": "/tmp/tmp.BY84opoHNX", 00:16:50.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:50.525 "hdgst": false, 00:16:50.525 "ddgst": false 00:16:50.525 } 00:16:50.525 }, 00:16:50.525 { 00:16:50.525 "method": "bdev_nvme_set_hotplug", 00:16:50.525 "params": { 00:16:50.525 "period_us": 100000, 00:16:50.525 "enable": false 00:16:50.525 } 00:16:50.525 }, 00:16:50.525 { 00:16:50.525 "method": "bdev_wait_for_examine" 00:16:50.525 } 00:16:50.525 ] 00:16:50.525 }, 00:16:50.525 { 00:16:50.525 "subsystem": "nbd", 00:16:50.525 "config": [] 00:16:50.525 } 00:16:50.525 ] 00:16:50.525 }' 00:16:50.525 [2024-05-15 00:53:37.308246] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:16:50.525 [2024-05-15 00:53:37.308346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4024385 ] 00:16:50.525 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.525 [2024-05-15 00:53:37.369774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.525 [2024-05-15 00:53:37.490691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.783 [2024-05-15 00:53:37.639438] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:50.783 [2024-05-15 00:53:37.639585] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:51.350 00:53:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:51.350 00:53:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:51.350 00:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:51.608 Running I/O for 10 seconds... 00:17:01.614 00:17:01.614 Latency(us) 00:17:01.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.614 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:01.614 Verification LBA range: start 0x0 length 0x2000 00:17:01.614 TLSTESTn1 : 10.05 2527.85 9.87 0.00 0.00 50489.90 9563.40 92818.39 00:17:01.614 =================================================================================================================== 00:17:01.614 Total : 2527.85 9.87 0.00 0.00 50489.90 9563.40 92818.39 00:17:01.614 0 00:17:01.614 00:53:48 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:01.614 00:53:48 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 4024385 00:17:01.614 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4024385 ']' 00:17:01.614 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4024385 00:17:01.614 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:01.614 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:01.614 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4024385 00:17:01.614 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:01.614 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:01.614 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4024385' 00:17:01.614 killing process with pid 4024385 00:17:01.614 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4024385 00:17:01.614 Received shutdown signal, test time was about 10.000000 seconds 00:17:01.614 00:17:01.614 Latency(us) 00:17:01.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.614 =================================================================================================================== 00:17:01.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:01.614 [2024-05-15 00:53:48.568869] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:17:01.614 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4024385 00:17:01.899 00:53:48 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 4024267 00:17:01.899 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4024267 ']' 00:17:01.899 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4024267 00:17:01.899 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:01.899 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:01.899 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4024267 00:17:01.899 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:01.899 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:01.899 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4024267' 00:17:01.899 killing process with pid 4024267 00:17:01.899 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4024267 00:17:01.899 [2024-05-15 00:53:48.815605] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:01.899 [2024-05-15 00:53:48.815655] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:01.899 00:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4024267 00:17:02.157 00:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:17:02.157 00:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.157 00:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:02.157 00:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.157 00:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4025488 00:17:02.157 00:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:02.157 00:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4025488 00:17:02.157 00:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4025488 ']' 00:17:02.157 00:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.157 00:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:02.157 00:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.157 00:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:02.157 00:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.157 [2024-05-15 00:53:49.092824] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
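Teardown kills the initiator (bdevperf, pid 4024385) before the target (pid 4024267), and killprocess refuses to touch a pid it cannot identify. A simplified sketch modeled on the ps and uname checks visible in the trace; the real helper in autotest_common.sh also special-cases processes running under sudo, which is what the "reactor_N = sudo" comparisons above are doing:

    killprocess() {
        local pid=$1 name
        name=$(ps --no-headers -o comm= "$pid") || return 1   # pid already gone?
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true    # reap it so the shell sees the exit status
    }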
00:17:02.157 [2024-05-15 00:53:49.092916] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.157 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.157 [2024-05-15 00:53:49.156571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.415 [2024-05-15 00:53:49.272739] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.415 [2024-05-15 00:53:49.272801] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.415 [2024-05-15 00:53:49.272817] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.415 [2024-05-15 00:53:49.272829] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.415 [2024-05-15 00:53:49.272841] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.415 [2024-05-15 00:53:49.272870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.415 00:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:02.415 00:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:02.415 00:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.415 00:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.415 00:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.415 00:53:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.415 00:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.BY84opoHNX 00:17:02.415 00:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BY84opoHNX 00:17:02.415 00:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:02.673 [2024-05-15 00:53:49.685041] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.673 00:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:03.238 00:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:03.238 [2024-05-15 00:53:50.274581] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:03.238 [2024-05-15 00:53:50.274695] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:03.238 [2024-05-15 00:53:50.274882] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.238 00:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:03.803 malloc0 00:17:03.803 00:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
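setup_nvmf_tgt (target/tls.sh@49-58) drives the freshly started target over rpc.py: create the TCP transport, create the subsystem, attach a TLS-capable listener, back it with a malloc namespace, and authorize the host against the PSK file. Condensed from the commands in this trace (the add_host step appears just below):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k        # -k marks the listener as TLS
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BY84opoHNX   # deprecated PSK-path form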
00:17:04.060 00:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BY84opoHNX 00:17:04.318 [2024-05-15 00:53:51.163510] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:04.318 00:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=4025708 00:17:04.318 00:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:04.318 00:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:04.318 00:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 4025708 /var/tmp/bdevperf.sock 00:17:04.318 00:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4025708 ']' 00:17:04.318 00:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:04.318 00:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:04.318 00:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:04.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:04.318 00:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:04.318 00:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.318 [2024-05-15 00:53:51.229919] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:17:04.318 [2024-05-15 00:53:51.230031] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4025708 ] 00:17:04.318 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.318 [2024-05-15 00:53:51.290477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.576 [2024-05-15 00:53:51.410149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.576 00:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:04.576 00:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:04.576 00:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BY84opoHNX 00:17:04.835 00:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:05.092 [2024-05-15 00:53:52.074869] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:05.349 nvme0n1 00:17:05.349 00:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:05.349 Running I/O for 1 seconds... 
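On the initiator side the test switches to the keyring API: the PSK file is registered as a named key against the bdevperf socket, then passed to bdev_nvme_attach_controller by name rather than by path, and the workload is kicked off over RPC. Condensed from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BY84opoHNX
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests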
00:17:06.282 00:17:06.282 Latency(us) 00:17:06.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.282 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:06.282 Verification LBA range: start 0x0 length 0x2000 00:17:06.282 nvme0n1 : 1.07 1305.85 5.10 0.00 0.00 94922.81 7281.78 76895.57 00:17:06.282 =================================================================================================================== 00:17:06.282 Total : 1305.85 5.10 0.00 0.00 94922.81 7281.78 76895.57 00:17:06.541 0 00:17:06.541 00:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 4025708 00:17:06.541 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4025708 ']' 00:17:06.541 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4025708 00:17:06.541 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:06.541 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:06.541 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4025708 00:17:06.541 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:06.541 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:06.541 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4025708' 00:17:06.541 killing process with pid 4025708 00:17:06.541 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4025708 00:17:06.541 Received shutdown signal, test time was about 1.000000 seconds 00:17:06.541 00:17:06.541 Latency(us) 00:17:06.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.541 =================================================================================================================== 00:17:06.541 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:06.541 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4025708 00:17:06.541 00:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 4025488 00:17:06.541 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4025488 ']' 00:17:06.541 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4025488 00:17:06.801 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:06.801 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:06.801 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4025488 00:17:06.801 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:06.801 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:06.801 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4025488' 00:17:06.801 killing process with pid 4025488 00:17:06.801 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4025488 00:17:06.801 [2024-05-15 00:53:53.627760] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:06.801 [2024-05-15 00:53:53.627816] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:06.801 00:53:53 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 4025488 00:17:06.801 00:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:17:06.801 00:53:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:06.801 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:06.801 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:07.061 00:53:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4025929 00:17:07.061 00:53:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:07.061 00:53:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4025929 00:17:07.061 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4025929 ']' 00:17:07.061 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.061 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:07.061 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.061 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:07.061 00:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:07.061 [2024-05-15 00:53:53.912202] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:17:07.061 [2024-05-15 00:53:53.912296] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.061 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.061 [2024-05-15 00:53:53.976881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.061 [2024-05-15 00:53:54.093954] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.061 [2024-05-15 00:53:54.094027] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.061 [2024-05-15 00:53:54.094042] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.061 [2024-05-15 00:53:54.094059] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.061 [2024-05-15 00:53:54.094070] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
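Each nvmfappstart in this trace arms the same cleanup trap, so an interrupted run still dumps the app's shared memory and tears the target down. The idiom as it appears above:

    # On any exit path, try to capture shared memory for debugging, then run
    # the common teardown; '|| :' keeps a dump failure from masking the
    # real exit status.
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT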
00:17:07.061 [2024-05-15 00:53:54.094100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:07.320 [2024-05-15 00:53:54.234250] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.320 malloc0 00:17:07.320 [2024-05-15 00:53:54.264818] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:07.320 [2024-05-15 00:53:54.264913] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:07.320 [2024-05-15 00:53:54.265128] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=4026039 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 4026039 /var/tmp/bdevperf.sock 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4026039 ']' 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:07.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:07.320 00:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:07.320 [2024-05-15 00:53:54.339665] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
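The bare rpc_cmd calls above (target/tls.sh@239 and later @263) are the autotest shorthand for scripts/rpc.py against the current RPC socket. A minimal stand-in under that assumption; the real helper in autotest_common.sh also supports a persistent RPC daemon mode:

    rpc_cmd() {
        # $rpc_addr defaults to the target's /var/tmp/spdk.sock and is
        # overridden to /var/tmp/bdevperf.sock in the bperf contexts.
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            ${rpc_addr:+-s "$rpc_addr"} "$@"
    }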
00:17:07.320 [2024-05-15 00:53:54.339756] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4026039 ] 00:17:07.320 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.579 [2024-05-15 00:53:54.399908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.579 [2024-05-15 00:53:54.516533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.579 00:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:07.579 00:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:07.579 00:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BY84opoHNX 00:17:08.144 00:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:08.144 [2024-05-15 00:53:55.119508] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:08.144 nvme0n1 00:17:08.403 00:53:55 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:08.403 Running I/O for 1 seconds... 00:17:09.337 00:17:09.337 Latency(us) 00:17:09.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.338 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:09.338 Verification LBA range: start 0x0 length 0x2000 00:17:09.338 nvme0n1 : 1.05 2344.80 9.16 0.00 0.00 53440.01 7184.69 88158.06 00:17:09.338 =================================================================================================================== 00:17:09.338 Total : 2344.80 9.16 0.00 0.00 53440.01 7184.69 88158.06 00:17:09.338 0 00:17:09.338 00:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:17:09.338 00:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.338 00:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.596 00:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.596 00:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:17:09.596 "subsystems": [ 00:17:09.596 { 00:17:09.596 "subsystem": "keyring", 00:17:09.596 "config": [ 00:17:09.596 { 00:17:09.596 "method": "keyring_file_add_key", 00:17:09.596 "params": { 00:17:09.596 "name": "key0", 00:17:09.596 "path": "/tmp/tmp.BY84opoHNX" 00:17:09.596 } 00:17:09.596 } 00:17:09.596 ] 00:17:09.596 }, 00:17:09.596 { 00:17:09.596 "subsystem": "iobuf", 00:17:09.596 "config": [ 00:17:09.596 { 00:17:09.596 "method": "iobuf_set_options", 00:17:09.596 "params": { 00:17:09.596 "small_pool_count": 8192, 00:17:09.596 "large_pool_count": 1024, 00:17:09.596 "small_bufsize": 8192, 00:17:09.596 "large_bufsize": 135168 00:17:09.596 } 00:17:09.596 } 00:17:09.596 ] 00:17:09.596 }, 00:17:09.596 { 00:17:09.596 "subsystem": "sock", 00:17:09.596 "config": [ 00:17:09.596 { 00:17:09.596 "method": "sock_impl_set_options", 00:17:09.596 "params": { 00:17:09.596 "impl_name": "posix", 00:17:09.596 "recv_buf_size": 2097152, 
00:17:09.596 "send_buf_size": 2097152, 00:17:09.596 "enable_recv_pipe": true, 00:17:09.596 "enable_quickack": false, 00:17:09.596 "enable_placement_id": 0, 00:17:09.596 "enable_zerocopy_send_server": true, 00:17:09.596 "enable_zerocopy_send_client": false, 00:17:09.596 "zerocopy_threshold": 0, 00:17:09.596 "tls_version": 0, 00:17:09.596 "enable_ktls": false 00:17:09.596 } 00:17:09.596 }, 00:17:09.596 { 00:17:09.596 "method": "sock_impl_set_options", 00:17:09.596 "params": { 00:17:09.596 "impl_name": "ssl", 00:17:09.596 "recv_buf_size": 4096, 00:17:09.596 "send_buf_size": 4096, 00:17:09.596 "enable_recv_pipe": true, 00:17:09.596 "enable_quickack": false, 00:17:09.596 "enable_placement_id": 0, 00:17:09.596 "enable_zerocopy_send_server": true, 00:17:09.596 "enable_zerocopy_send_client": false, 00:17:09.596 "zerocopy_threshold": 0, 00:17:09.596 "tls_version": 0, 00:17:09.596 "enable_ktls": false 00:17:09.596 } 00:17:09.596 } 00:17:09.596 ] 00:17:09.596 }, 00:17:09.596 { 00:17:09.596 "subsystem": "vmd", 00:17:09.596 "config": [] 00:17:09.596 }, 00:17:09.596 { 00:17:09.596 "subsystem": "accel", 00:17:09.596 "config": [ 00:17:09.596 { 00:17:09.596 "method": "accel_set_options", 00:17:09.596 "params": { 00:17:09.596 "small_cache_size": 128, 00:17:09.596 "large_cache_size": 16, 00:17:09.596 "task_count": 2048, 00:17:09.596 "sequence_count": 2048, 00:17:09.596 "buf_count": 2048 00:17:09.596 } 00:17:09.596 } 00:17:09.596 ] 00:17:09.596 }, 00:17:09.596 { 00:17:09.596 "subsystem": "bdev", 00:17:09.596 "config": [ 00:17:09.596 { 00:17:09.596 "method": "bdev_set_options", 00:17:09.596 "params": { 00:17:09.596 "bdev_io_pool_size": 65535, 00:17:09.597 "bdev_io_cache_size": 256, 00:17:09.597 "bdev_auto_examine": true, 00:17:09.597 "iobuf_small_cache_size": 128, 00:17:09.597 "iobuf_large_cache_size": 16 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "bdev_raid_set_options", 00:17:09.597 "params": { 00:17:09.597 "process_window_size_kb": 1024 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "bdev_iscsi_set_options", 00:17:09.597 "params": { 00:17:09.597 "timeout_sec": 30 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "bdev_nvme_set_options", 00:17:09.597 "params": { 00:17:09.597 "action_on_timeout": "none", 00:17:09.597 "timeout_us": 0, 00:17:09.597 "timeout_admin_us": 0, 00:17:09.597 "keep_alive_timeout_ms": 10000, 00:17:09.597 "arbitration_burst": 0, 00:17:09.597 "low_priority_weight": 0, 00:17:09.597 "medium_priority_weight": 0, 00:17:09.597 "high_priority_weight": 0, 00:17:09.597 "nvme_adminq_poll_period_us": 10000, 00:17:09.597 "nvme_ioq_poll_period_us": 0, 00:17:09.597 "io_queue_requests": 0, 00:17:09.597 "delay_cmd_submit": true, 00:17:09.597 "transport_retry_count": 4, 00:17:09.597 "bdev_retry_count": 3, 00:17:09.597 "transport_ack_timeout": 0, 00:17:09.597 "ctrlr_loss_timeout_sec": 0, 00:17:09.597 "reconnect_delay_sec": 0, 00:17:09.597 "fast_io_fail_timeout_sec": 0, 00:17:09.597 "disable_auto_failback": false, 00:17:09.597 "generate_uuids": false, 00:17:09.597 "transport_tos": 0, 00:17:09.597 "nvme_error_stat": false, 00:17:09.597 "rdma_srq_size": 0, 00:17:09.597 "io_path_stat": false, 00:17:09.597 "allow_accel_sequence": false, 00:17:09.597 "rdma_max_cq_size": 0, 00:17:09.597 "rdma_cm_event_timeout_ms": 0, 00:17:09.597 "dhchap_digests": [ 00:17:09.597 "sha256", 00:17:09.597 "sha384", 00:17:09.597 "sha512" 00:17:09.597 ], 00:17:09.597 "dhchap_dhgroups": [ 00:17:09.597 "null", 00:17:09.597 "ffdhe2048", 00:17:09.597 "ffdhe3072", 
00:17:09.597 "ffdhe4096", 00:17:09.597 "ffdhe6144", 00:17:09.597 "ffdhe8192" 00:17:09.597 ] 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "bdev_nvme_set_hotplug", 00:17:09.597 "params": { 00:17:09.597 "period_us": 100000, 00:17:09.597 "enable": false 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "bdev_malloc_create", 00:17:09.597 "params": { 00:17:09.597 "name": "malloc0", 00:17:09.597 "num_blocks": 8192, 00:17:09.597 "block_size": 4096, 00:17:09.597 "physical_block_size": 4096, 00:17:09.597 "uuid": "6295accf-1cf6-41bd-a96a-e10d63377c9c", 00:17:09.597 "optimal_io_boundary": 0 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "bdev_wait_for_examine" 00:17:09.597 } 00:17:09.597 ] 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "subsystem": "nbd", 00:17:09.597 "config": [] 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "subsystem": "scheduler", 00:17:09.597 "config": [ 00:17:09.597 { 00:17:09.597 "method": "framework_set_scheduler", 00:17:09.597 "params": { 00:17:09.597 "name": "static" 00:17:09.597 } 00:17:09.597 } 00:17:09.597 ] 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "subsystem": "nvmf", 00:17:09.597 "config": [ 00:17:09.597 { 00:17:09.597 "method": "nvmf_set_config", 00:17:09.597 "params": { 00:17:09.597 "discovery_filter": "match_any", 00:17:09.597 "admin_cmd_passthru": { 00:17:09.597 "identify_ctrlr": false 00:17:09.597 } 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "nvmf_set_max_subsystems", 00:17:09.597 "params": { 00:17:09.597 "max_subsystems": 1024 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "nvmf_set_crdt", 00:17:09.597 "params": { 00:17:09.597 "crdt1": 0, 00:17:09.597 "crdt2": 0, 00:17:09.597 "crdt3": 0 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "nvmf_create_transport", 00:17:09.597 "params": { 00:17:09.597 "trtype": "TCP", 00:17:09.597 "max_queue_depth": 128, 00:17:09.597 "max_io_qpairs_per_ctrlr": 127, 00:17:09.597 "in_capsule_data_size": 4096, 00:17:09.597 "max_io_size": 131072, 00:17:09.597 "io_unit_size": 131072, 00:17:09.597 "max_aq_depth": 128, 00:17:09.597 "num_shared_buffers": 511, 00:17:09.597 "buf_cache_size": 4294967295, 00:17:09.597 "dif_insert_or_strip": false, 00:17:09.597 "zcopy": false, 00:17:09.597 "c2h_success": false, 00:17:09.597 "sock_priority": 0, 00:17:09.597 "abort_timeout_sec": 1, 00:17:09.597 "ack_timeout": 0, 00:17:09.597 "data_wr_pool_size": 0 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "nvmf_create_subsystem", 00:17:09.597 "params": { 00:17:09.597 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:09.597 "allow_any_host": false, 00:17:09.597 "serial_number": "00000000000000000000", 00:17:09.597 "model_number": "SPDK bdev Controller", 00:17:09.597 "max_namespaces": 32, 00:17:09.597 "min_cntlid": 1, 00:17:09.597 "max_cntlid": 65519, 00:17:09.597 "ana_reporting": false 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "nvmf_subsystem_add_host", 00:17:09.597 "params": { 00:17:09.597 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:09.597 "host": "nqn.2016-06.io.spdk:host1", 00:17:09.597 "psk": "key0" 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "nvmf_subsystem_add_ns", 00:17:09.597 "params": { 00:17:09.597 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:09.597 "namespace": { 00:17:09.597 "nsid": 1, 00:17:09.597 "bdev_name": "malloc0", 00:17:09.597 "nguid": "6295ACCF1CF641BDA96AE10D63377C9C", 00:17:09.597 "uuid": "6295accf-1cf6-41bd-a96a-e10d63377c9c", 00:17:09.597 
"no_auto_visible": false 00:17:09.597 } 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "nvmf_subsystem_add_listener", 00:17:09.597 "params": { 00:17:09.597 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:09.597 "listen_address": { 00:17:09.597 "trtype": "TCP", 00:17:09.597 "adrfam": "IPv4", 00:17:09.597 "traddr": "10.0.0.2", 00:17:09.597 "trsvcid": "4420" 00:17:09.597 }, 00:17:09.597 "secure_channel": true 00:17:09.597 } 00:17:09.597 } 00:17:09.597 ] 00:17:09.597 } 00:17:09.597 ] 00:17:09.597 }' 00:17:09.597 00:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:09.857 00:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:17:09.857 "subsystems": [ 00:17:09.857 { 00:17:09.857 "subsystem": "keyring", 00:17:09.857 "config": [ 00:17:09.857 { 00:17:09.857 "method": "keyring_file_add_key", 00:17:09.857 "params": { 00:17:09.857 "name": "key0", 00:17:09.857 "path": "/tmp/tmp.BY84opoHNX" 00:17:09.857 } 00:17:09.857 } 00:17:09.857 ] 00:17:09.857 }, 00:17:09.857 { 00:17:09.857 "subsystem": "iobuf", 00:17:09.857 "config": [ 00:17:09.857 { 00:17:09.857 "method": "iobuf_set_options", 00:17:09.857 "params": { 00:17:09.857 "small_pool_count": 8192, 00:17:09.857 "large_pool_count": 1024, 00:17:09.857 "small_bufsize": 8192, 00:17:09.857 "large_bufsize": 135168 00:17:09.857 } 00:17:09.857 } 00:17:09.857 ] 00:17:09.857 }, 00:17:09.857 { 00:17:09.857 "subsystem": "sock", 00:17:09.857 "config": [ 00:17:09.857 { 00:17:09.857 "method": "sock_impl_set_options", 00:17:09.857 "params": { 00:17:09.857 "impl_name": "posix", 00:17:09.857 "recv_buf_size": 2097152, 00:17:09.857 "send_buf_size": 2097152, 00:17:09.857 "enable_recv_pipe": true, 00:17:09.857 "enable_quickack": false, 00:17:09.857 "enable_placement_id": 0, 00:17:09.857 "enable_zerocopy_send_server": true, 00:17:09.857 "enable_zerocopy_send_client": false, 00:17:09.857 "zerocopy_threshold": 0, 00:17:09.857 "tls_version": 0, 00:17:09.857 "enable_ktls": false 00:17:09.857 } 00:17:09.857 }, 00:17:09.857 { 00:17:09.857 "method": "sock_impl_set_options", 00:17:09.857 "params": { 00:17:09.857 "impl_name": "ssl", 00:17:09.857 "recv_buf_size": 4096, 00:17:09.857 "send_buf_size": 4096, 00:17:09.857 "enable_recv_pipe": true, 00:17:09.857 "enable_quickack": false, 00:17:09.857 "enable_placement_id": 0, 00:17:09.857 "enable_zerocopy_send_server": true, 00:17:09.857 "enable_zerocopy_send_client": false, 00:17:09.857 "zerocopy_threshold": 0, 00:17:09.857 "tls_version": 0, 00:17:09.857 "enable_ktls": false 00:17:09.857 } 00:17:09.857 } 00:17:09.857 ] 00:17:09.857 }, 00:17:09.857 { 00:17:09.857 "subsystem": "vmd", 00:17:09.857 "config": [] 00:17:09.857 }, 00:17:09.857 { 00:17:09.857 "subsystem": "accel", 00:17:09.857 "config": [ 00:17:09.857 { 00:17:09.857 "method": "accel_set_options", 00:17:09.857 "params": { 00:17:09.857 "small_cache_size": 128, 00:17:09.857 "large_cache_size": 16, 00:17:09.857 "task_count": 2048, 00:17:09.857 "sequence_count": 2048, 00:17:09.857 "buf_count": 2048 00:17:09.857 } 00:17:09.857 } 00:17:09.857 ] 00:17:09.857 }, 00:17:09.857 { 00:17:09.857 "subsystem": "bdev", 00:17:09.857 "config": [ 00:17:09.857 { 00:17:09.857 "method": "bdev_set_options", 00:17:09.857 "params": { 00:17:09.857 "bdev_io_pool_size": 65535, 00:17:09.857 "bdev_io_cache_size": 256, 00:17:09.857 "bdev_auto_examine": true, 00:17:09.857 "iobuf_small_cache_size": 128, 00:17:09.857 "iobuf_large_cache_size": 16 00:17:09.857 } 00:17:09.857 }, 
00:17:09.857 { 00:17:09.857 "method": "bdev_raid_set_options", 00:17:09.857 "params": { 00:17:09.857 "process_window_size_kb": 1024 00:17:09.857 } 00:17:09.857 }, 00:17:09.857 { 00:17:09.857 "method": "bdev_iscsi_set_options", 00:17:09.857 "params": { 00:17:09.857 "timeout_sec": 30 00:17:09.857 } 00:17:09.857 }, 00:17:09.857 { 00:17:09.857 "method": "bdev_nvme_set_options", 00:17:09.857 "params": { 00:17:09.857 "action_on_timeout": "none", 00:17:09.857 "timeout_us": 0, 00:17:09.857 "timeout_admin_us": 0, 00:17:09.857 "keep_alive_timeout_ms": 10000, 00:17:09.857 "arbitration_burst": 0, 00:17:09.857 "low_priority_weight": 0, 00:17:09.857 "medium_priority_weight": 0, 00:17:09.857 "high_priority_weight": 0, 00:17:09.857 "nvme_adminq_poll_period_us": 10000, 00:17:09.857 "nvme_ioq_poll_period_us": 0, 00:17:09.857 "io_queue_requests": 512, 00:17:09.857 "delay_cmd_submit": true, 00:17:09.857 "transport_retry_count": 4, 00:17:09.857 "bdev_retry_count": 3, 00:17:09.857 "transport_ack_timeout": 0, 00:17:09.857 "ctrlr_loss_timeout_sec": 0, 00:17:09.857 "reconnect_delay_sec": 0, 00:17:09.857 "fast_io_fail_timeout_sec": 0, 00:17:09.857 "disable_auto_failback": false, 00:17:09.857 "generate_uuids": false, 00:17:09.857 "transport_tos": 0, 00:17:09.857 "nvme_error_stat": false, 00:17:09.857 "rdma_srq_size": 0, 00:17:09.857 "io_path_stat": false, 00:17:09.857 "allow_accel_sequence": false, 00:17:09.857 "rdma_max_cq_size": 0, 00:17:09.857 "rdma_cm_event_timeout_ms": 0, 00:17:09.857 "dhchap_digests": [ 00:17:09.857 "sha256", 00:17:09.857 "sha384", 00:17:09.857 "sha512" 00:17:09.857 ], 00:17:09.857 "dhchap_dhgroups": [ 00:17:09.857 "null", 00:17:09.857 "ffdhe2048", 00:17:09.857 "ffdhe3072", 00:17:09.857 "ffdhe4096", 00:17:09.857 "ffdhe6144", 00:17:09.857 "ffdhe8192" 00:17:09.857 ] 00:17:09.857 } 00:17:09.857 }, 00:17:09.857 { 00:17:09.857 "method": "bdev_nvme_attach_controller", 00:17:09.857 "params": { 00:17:09.857 "name": "nvme0", 00:17:09.857 "trtype": "TCP", 00:17:09.857 "adrfam": "IPv4", 00:17:09.857 "traddr": "10.0.0.2", 00:17:09.857 "trsvcid": "4420", 00:17:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:09.857 "prchk_reftag": false, 00:17:09.857 "prchk_guard": false, 00:17:09.857 "ctrlr_loss_timeout_sec": 0, 00:17:09.857 "reconnect_delay_sec": 0, 00:17:09.857 "fast_io_fail_timeout_sec": 0, 00:17:09.857 "psk": "key0", 00:17:09.857 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:09.857 "hdgst": false, 00:17:09.857 "ddgst": false 00:17:09.857 } 00:17:09.857 }, 00:17:09.857 { 00:17:09.857 "method": "bdev_nvme_set_hotplug", 00:17:09.857 "params": { 00:17:09.857 "period_us": 100000, 00:17:09.857 "enable": false 00:17:09.857 } 00:17:09.857 }, 00:17:09.857 { 00:17:09.857 "method": "bdev_enable_histogram", 00:17:09.857 "params": { 00:17:09.857 "name": "nvme0n1", 00:17:09.857 "enable": true 00:17:09.857 } 00:17:09.857 }, 00:17:09.857 { 00:17:09.857 "method": "bdev_wait_for_examine" 00:17:09.857 } 00:17:09.857 ] 00:17:09.857 }, 00:17:09.857 { 00:17:09.857 "subsystem": "nbd", 00:17:09.857 "config": [] 00:17:09.857 } 00:17:09.857 ] 00:17:09.857 }' 00:17:09.857 00:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 4026039 00:17:09.857 00:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4026039 ']' 00:17:09.857 00:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4026039 00:17:09.857 00:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:09.857 00:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:09.857 
00:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4026039 00:17:09.857 00:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:09.857 00:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:09.857 00:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4026039' 00:17:09.857 killing process with pid 4026039 00:17:09.857 00:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4026039 00:17:09.857 Received shutdown signal, test time was about 1.000000 seconds 00:17:09.857 00:17:09.857 Latency(us) 00:17:09.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.857 =================================================================================================================== 00:17:09.857 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:09.857 00:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4026039 00:17:10.117 00:53:57 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 4025929 00:17:10.117 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4025929 ']' 00:17:10.117 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4025929 00:17:10.117 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:10.117 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:10.117 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4025929 00:17:10.117 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:10.117 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:10.117 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4025929' 00:17:10.117 killing process with pid 4025929 00:17:10.117 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4025929 00:17:10.117 [2024-05-15 00:53:57.112342] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:10.117 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4025929 00:17:10.376 00:53:57 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:17:10.376 00:53:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:10.376 00:53:57 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:17:10.376 "subsystems": [ 00:17:10.376 { 00:17:10.376 "subsystem": "keyring", 00:17:10.376 "config": [ 00:17:10.376 { 00:17:10.376 "method": "keyring_file_add_key", 00:17:10.376 "params": { 00:17:10.376 "name": "key0", 00:17:10.376 "path": "/tmp/tmp.BY84opoHNX" 00:17:10.376 } 00:17:10.376 } 00:17:10.376 ] 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "subsystem": "iobuf", 00:17:10.376 "config": [ 00:17:10.376 { 00:17:10.376 "method": "iobuf_set_options", 00:17:10.376 "params": { 00:17:10.376 "small_pool_count": 8192, 00:17:10.376 "large_pool_count": 1024, 00:17:10.376 "small_bufsize": 8192, 00:17:10.376 "large_bufsize": 135168 00:17:10.376 } 00:17:10.376 } 00:17:10.376 ] 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "subsystem": "sock", 00:17:10.376 "config": [ 00:17:10.376 { 00:17:10.376 "method": "sock_impl_set_options", 00:17:10.376 "params": { 00:17:10.376 "impl_name": "posix", 00:17:10.376 
"recv_buf_size": 2097152, 00:17:10.376 "send_buf_size": 2097152, 00:17:10.376 "enable_recv_pipe": true, 00:17:10.376 "enable_quickack": false, 00:17:10.376 "enable_placement_id": 0, 00:17:10.376 "enable_zerocopy_send_server": true, 00:17:10.376 "enable_zerocopy_send_client": false, 00:17:10.376 "zerocopy_threshold": 0, 00:17:10.376 "tls_version": 0, 00:17:10.376 "enable_ktls": false 00:17:10.376 } 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "method": "sock_impl_set_options", 00:17:10.376 "params": { 00:17:10.376 "impl_name": "ssl", 00:17:10.376 "recv_buf_size": 4096, 00:17:10.376 "send_buf_size": 4096, 00:17:10.376 "enable_recv_pipe": true, 00:17:10.376 "enable_quickack": false, 00:17:10.376 "enable_placement_id": 0, 00:17:10.376 "enable_zerocopy_send_server": true, 00:17:10.376 "enable_zerocopy_send_client": false, 00:17:10.376 "zerocopy_threshold": 0, 00:17:10.376 "tls_version": 0, 00:17:10.376 "enable_ktls": false 00:17:10.376 } 00:17:10.376 } 00:17:10.376 ] 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "subsystem": "vmd", 00:17:10.376 "config": [] 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "subsystem": "accel", 00:17:10.376 "config": [ 00:17:10.376 { 00:17:10.376 "method": "accel_set_options", 00:17:10.376 "params": { 00:17:10.376 "small_cache_size": 128, 00:17:10.376 "large_cache_size": 16, 00:17:10.376 "task_count": 2048, 00:17:10.376 "sequence_count": 2048, 00:17:10.376 "buf_count": 2048 00:17:10.376 } 00:17:10.376 } 00:17:10.376 ] 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "subsystem": "bdev", 00:17:10.376 "config": [ 00:17:10.376 { 00:17:10.376 "method": "bdev_set_options", 00:17:10.376 "params": { 00:17:10.376 "bdev_io_pool_size": 65535, 00:17:10.376 "bdev_io_cache_size": 256, 00:17:10.376 "bdev_auto_examine": true, 00:17:10.376 "iobuf_small_cache_size": 128, 00:17:10.376 "iobuf_large_cache_size": 16 00:17:10.376 } 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "method": "bdev_raid_set_options", 00:17:10.376 "params": { 00:17:10.376 "process_window_size_kb": 1024 00:17:10.376 } 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "method": "bdev_iscsi_set_options", 00:17:10.376 "params": { 00:17:10.376 "timeout_sec": 30 00:17:10.376 } 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "method": "bdev_nvme_set_options", 00:17:10.376 "params": { 00:17:10.376 "action_on_timeout": "none", 00:17:10.376 "timeout_us": 0, 00:17:10.376 "timeout_admin_us": 0, 00:17:10.376 "keep_alive_timeout_ms": 10000, 00:17:10.376 "arbitration_burst": 0, 00:17:10.376 "low_priority_weight": 0, 00:17:10.376 "medium_priority_weight": 0, 00:17:10.376 "high_priority_weight": 0, 00:17:10.376 "nvme_adminq_poll_period_us": 10000, 00:17:10.376 "nvme_ioq_poll_period_us": 0, 00:17:10.376 "io_queue_requests": 0, 00:17:10.376 "delay_cmd_submit": true, 00:17:10.376 "transport_retry_count": 4, 00:17:10.376 "bdev_retry_count": 3, 00:17:10.376 "transport_ack_timeout": 0, 00:17:10.376 "ctrlr_loss_timeout_sec": 0, 00:17:10.376 "reconnect_delay_sec": 0, 00:17:10.376 "fast_io_fail_timeout_sec": 0, 00:17:10.376 "disable_auto_failback": false, 00:17:10.376 "generate_uuids": false, 00:17:10.376 "transport_tos": 0, 00:17:10.376 "nvme_error_stat": false, 00:17:10.376 "rdma_srq_size": 0, 00:17:10.376 "io_path_stat": false, 00:17:10.376 "allow_accel_sequence": false, 00:17:10.376 "rdma_max_cq_size": 0, 00:17:10.376 "rdma_cm_event_timeout_ms": 0, 00:17:10.376 "dhchap_digests": [ 00:17:10.376 "sha256", 00:17:10.376 "sha384", 00:17:10.376 "sha512" 00:17:10.376 ], 00:17:10.376 "dhchap_dhgroups": [ 00:17:10.376 "null", 00:17:10.376 "ffdhe2048", 
00:17:10.376 "ffdhe3072", 00:17:10.376 "ffdhe4096", 00:17:10.376 "ffdhe6144", 00:17:10.376 "ffdhe8192" 00:17:10.376 ] 00:17:10.376 } 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "method": "bdev_nvme_set_hotplug", 00:17:10.376 "params": { 00:17:10.376 "period_us": 100000, 00:17:10.376 "enable": false 00:17:10.376 } 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "method": "bdev_malloc_create", 00:17:10.376 "params": { 00:17:10.376 "name": "malloc0", 00:17:10.376 "num_blocks": 8192, 00:17:10.376 "block_size": 4096, 00:17:10.376 "physical_block_size": 4096, 00:17:10.376 "uuid": "6295accf-1cf6-41bd-a96a-e10d63377c9c", 00:17:10.376 "optimal_io_boundary": 0 00:17:10.376 } 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "method": "bdev_wait_for_examine" 00:17:10.376 } 00:17:10.376 ] 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "subsystem": "nbd", 00:17:10.376 "config": [] 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "subsystem": "scheduler", 00:17:10.376 "config": [ 00:17:10.376 { 00:17:10.376 "method": "framework_set_scheduler", 00:17:10.376 "params": { 00:17:10.376 "name": "static" 00:17:10.376 } 00:17:10.376 } 00:17:10.376 ] 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "subsystem": "nvmf", 00:17:10.376 "config": [ 00:17:10.376 { 00:17:10.376 "method": "nvmf_set_config", 00:17:10.376 "params": { 00:17:10.376 "discovery_filter": "match_any", 00:17:10.376 "admin_cmd_passthru": { 00:17:10.376 "identify_ctrlr": false 00:17:10.376 } 00:17:10.376 } 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "method": "nvmf_set_max_subsystems", 00:17:10.376 "params": { 00:17:10.376 "max_subsystems": 1024 00:17:10.376 } 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "method": "nvmf_set_crdt", 00:17:10.376 "params": { 00:17:10.376 "crdt1": 0, 00:17:10.376 "crdt2": 0, 00:17:10.376 "crdt3": 0 00:17:10.376 } 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "method": "nvmf_create_transport", 00:17:10.376 "params": { 00:17:10.376 "trtype": "TCP", 00:17:10.376 "max_queue_depth": 128, 00:17:10.376 "max_io_qpairs_per_ctrlr": 127, 00:17:10.376 "in_capsule_data_size": 4096, 00:17:10.376 "max_io_size": 131072, 00:17:10.376 "io_unit_size": 131072, 00:17:10.376 "max_aq_depth": 128, 00:17:10.376 "num_shared_buffers": 511, 00:17:10.376 "buf_cache_size": 4294967295, 00:17:10.376 "dif_insert_or_strip": false, 00:17:10.376 "zcopy": false, 00:17:10.376 "c2h_success": false, 00:17:10.376 "sock_priority": 0, 00:17:10.376 "abort_timeout_sec": 1, 00:17:10.376 "ack_timeout": 0, 00:17:10.376 "data_wr_pool_size": 0 00:17:10.376 } 00:17:10.376 }, 00:17:10.376 { 00:17:10.376 "method": "nvmf_create_subsystem", 00:17:10.376 "params": { 00:17:10.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.376 "allow_any_host": false, 00:17:10.376 "serial_number": "00000000000000000000", 00:17:10.376 "model_number": "SPDK bdev Controller", 00:17:10.376 "max_namespaces": 32, 00:17:10.376 "min_cntlid": 1, 00:17:10.377 "max_cntlid": 65519, 00:17:10.377 "ana_reporting": false 00:17:10.377 } 00:17:10.377 }, 00:17:10.377 { 00:17:10.377 "method": "nvmf_subsystem_add_host", 00:17:10.377 "params": { 00:17:10.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.377 "host": "nqn.2016-06.io.spdk:host1", 00:17:10.377 "psk": "key0" 00:17:10.377 } 00:17:10.377 }, 00:17:10.377 { 00:17:10.377 "method": "nvmf_subsystem_add_ns", 00:17:10.377 "params": { 00:17:10.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.377 "namespace": { 00:17:10.377 "nsid": 1, 00:17:10.377 "bdev_name": "malloc0", 00:17:10.377 "nguid": "6295ACCF1CF641BDA96AE10D63377C9C", 00:17:10.377 "uuid": 
"6295accf-1cf6-41bd-a96a-e10d63377c9c", 00:17:10.377 "no_auto_visible": false 00:17:10.377 } 00:17:10.377 } 00:17:10.377 }, 00:17:10.377 { 00:17:10.377 "method": "nvmf_subsystem_add_listener", 00:17:10.377 "params": { 00:17:10.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.377 "listen_address": { 00:17:10.377 "trtype": "TCP", 00:17:10.377 "adrfam": "IPv4", 00:17:10.377 "traddr": "10.0.0.2", 00:17:10.377 "trsvcid": "4420" 00:17:10.377 }, 00:17:10.377 "secure_channel": true 00:17:10.377 } 00:17:10.377 } 00:17:10.377 ] 00:17:10.377 } 00:17:10.377 ] 00:17:10.377 }' 00:17:10.377 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:10.377 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:10.377 00:53:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4026328 00:17:10.377 00:53:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:10.377 00:53:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4026328 00:17:10.377 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4026328 ']' 00:17:10.377 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.377 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:10.377 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.377 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:10.377 00:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:10.377 [2024-05-15 00:53:57.395023] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:17:10.377 [2024-05-15 00:53:57.395106] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.377 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.643 [2024-05-15 00:53:57.459266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.643 [2024-05-15 00:53:57.574316] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.643 [2024-05-15 00:53:57.574377] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.643 [2024-05-15 00:53:57.574392] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.643 [2024-05-15 00:53:57.574405] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.643 [2024-05-15 00:53:57.574417] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:10.643 [2024-05-15 00:53:57.574502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.905 [2024-05-15 00:53:57.798828] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.905 [2024-05-15 00:53:57.830779] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:10.905 [2024-05-15 00:53:57.830853] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:10.905 [2024-05-15 00:53:57.839145] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.472 00:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:11.472 00:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:11.472 00:53:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:11.472 00:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:11.472 00:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:11.472 00:53:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.472 00:53:58 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=4026448 00:17:11.472 00:53:58 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 4026448 /var/tmp/bdevperf.sock 00:17:11.472 00:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4026448 ']' 00:17:11.472 00:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.472 00:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:11.472 00:53:58 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:11.472 00:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
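bdevperf is launched here with -z, so it comes up idle and runs nothing until bdevperf.py perform_tests is fired at it further down; -r names its RPC socket, and its config arrives on /dev/fd/63 the same way the target's did. The waitforlisten helper that prints the message above simply polls that socket until the application answers. A rough equivalent of that polling loop (using spdk_get_version as the liveness probe and a 100 x 0.1 s budget is an assumption; the real helper also tracks the pid):

    sock=/var/tmp/bdevperf.sock
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        # Any cheap RPC will do; getting an answer means the app is listening.
        if "$rpc" -s "$sock" -t 1 spdk_get_version >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done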
00:17:11.472 00:53:58 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:17:11.472 "subsystems": [ 00:17:11.472 { 00:17:11.472 "subsystem": "keyring", 00:17:11.472 "config": [ 00:17:11.472 { 00:17:11.472 "method": "keyring_file_add_key", 00:17:11.472 "params": { 00:17:11.472 "name": "key0", 00:17:11.472 "path": "/tmp/tmp.BY84opoHNX" 00:17:11.472 } 00:17:11.472 } 00:17:11.472 ] 00:17:11.472 }, 00:17:11.472 { 00:17:11.472 "subsystem": "iobuf", 00:17:11.472 "config": [ 00:17:11.472 { 00:17:11.472 "method": "iobuf_set_options", 00:17:11.472 "params": { 00:17:11.472 "small_pool_count": 8192, 00:17:11.472 "large_pool_count": 1024, 00:17:11.472 "small_bufsize": 8192, 00:17:11.472 "large_bufsize": 135168 00:17:11.472 } 00:17:11.472 } 00:17:11.472 ] 00:17:11.472 }, 00:17:11.472 { 00:17:11.472 "subsystem": "sock", 00:17:11.472 "config": [ 00:17:11.472 { 00:17:11.472 "method": "sock_impl_set_options", 00:17:11.472 "params": { 00:17:11.472 "impl_name": "posix", 00:17:11.472 "recv_buf_size": 2097152, 00:17:11.472 "send_buf_size": 2097152, 00:17:11.472 "enable_recv_pipe": true, 00:17:11.472 "enable_quickack": false, 00:17:11.472 "enable_placement_id": 0, 00:17:11.472 "enable_zerocopy_send_server": true, 00:17:11.472 "enable_zerocopy_send_client": false, 00:17:11.472 "zerocopy_threshold": 0, 00:17:11.472 "tls_version": 0, 00:17:11.472 "enable_ktls": false 00:17:11.472 } 00:17:11.472 }, 00:17:11.472 { 00:17:11.472 "method": "sock_impl_set_options", 00:17:11.472 "params": { 00:17:11.472 "impl_name": "ssl", 00:17:11.472 "recv_buf_size": 4096, 00:17:11.472 "send_buf_size": 4096, 00:17:11.472 "enable_recv_pipe": true, 00:17:11.472 "enable_quickack": false, 00:17:11.472 "enable_placement_id": 0, 00:17:11.472 "enable_zerocopy_send_server": true, 00:17:11.472 "enable_zerocopy_send_client": false, 00:17:11.472 "zerocopy_threshold": 0, 00:17:11.472 "tls_version": 0, 00:17:11.472 "enable_ktls": false 00:17:11.472 } 00:17:11.472 } 00:17:11.472 ] 00:17:11.472 }, 00:17:11.472 { 00:17:11.472 "subsystem": "vmd", 00:17:11.472 "config": [] 00:17:11.472 }, 00:17:11.472 { 00:17:11.472 "subsystem": "accel", 00:17:11.472 "config": [ 00:17:11.472 { 00:17:11.472 "method": "accel_set_options", 00:17:11.472 "params": { 00:17:11.472 "small_cache_size": 128, 00:17:11.472 "large_cache_size": 16, 00:17:11.472 "task_count": 2048, 00:17:11.472 "sequence_count": 2048, 00:17:11.472 "buf_count": 2048 00:17:11.472 } 00:17:11.472 } 00:17:11.472 ] 00:17:11.472 }, 00:17:11.472 { 00:17:11.472 "subsystem": "bdev", 00:17:11.472 "config": [ 00:17:11.472 { 00:17:11.472 "method": "bdev_set_options", 00:17:11.472 "params": { 00:17:11.472 "bdev_io_pool_size": 65535, 00:17:11.472 "bdev_io_cache_size": 256, 00:17:11.472 "bdev_auto_examine": true, 00:17:11.472 "iobuf_small_cache_size": 128, 00:17:11.472 "iobuf_large_cache_size": 16 00:17:11.472 } 00:17:11.472 }, 00:17:11.472 { 00:17:11.472 "method": "bdev_raid_set_options", 00:17:11.472 "params": { 00:17:11.472 "process_window_size_kb": 1024 00:17:11.472 } 00:17:11.472 }, 00:17:11.472 { 00:17:11.472 "method": "bdev_iscsi_set_options", 00:17:11.472 "params": { 00:17:11.472 "timeout_sec": 30 00:17:11.472 } 00:17:11.472 }, 00:17:11.472 { 00:17:11.472 "method": "bdev_nvme_set_options", 00:17:11.472 "params": { 00:17:11.472 "action_on_timeout": "none", 00:17:11.472 "timeout_us": 0, 00:17:11.472 "timeout_admin_us": 0, 00:17:11.472 "keep_alive_timeout_ms": 10000, 00:17:11.472 "arbitration_burst": 0, 00:17:11.472 "low_priority_weight": 0, 00:17:11.472 "medium_priority_weight": 0, 00:17:11.472 
"high_priority_weight": 0, 00:17:11.472 "nvme_adminq_poll_period_us": 10000, 00:17:11.472 "nvme_ioq_poll_period_us": 0, 00:17:11.472 "io_queue_requests": 512, 00:17:11.472 "delay_cmd_submit": true, 00:17:11.472 "transport_retry_count": 4, 00:17:11.472 "bdev_retry_count": 3, 00:17:11.472 "transport_ack_timeout": 0, 00:17:11.472 "ctrlr_loss_timeout_sec": 0, 00:17:11.472 "reconnect_delay_sec": 0, 00:17:11.472 "fast_io_fail_timeout_sec": 0, 00:17:11.472 "disable_auto_failback": false, 00:17:11.472 "generate_uuids": false, 00:17:11.472 "transport_tos": 0, 00:17:11.472 "nvme_error_stat": false, 00:17:11.472 "rdma_srq_size": 0, 00:17:11.472 "io_path_stat": false, 00:17:11.472 "allow_accel_sequence": false, 00:17:11.472 "rdma_max_cq_size": 0, 00:17:11.472 "rdma_cm_event_timeout_ms": 0, 00:17:11.472 "dhchap_digests": [ 00:17:11.472 "sha256", 00:17:11.472 "sha384", 00:17:11.472 "sha512" 00:17:11.472 ], 00:17:11.472 "dhchap_dhgroups": [ 00:17:11.472 "null", 00:17:11.472 "ffdhe2048", 00:17:11.472 "ffdhe3072", 00:17:11.472 "ffdhe4096", 00:17:11.472 "ffdhe6144", 00:17:11.472 "ffdhe8192" 00:17:11.472 ] 00:17:11.472 } 00:17:11.472 }, 00:17:11.472 { 00:17:11.472 "method": "bdev_nvme_attach_controller", 00:17:11.472 "params": { 00:17:11.472 "name": "nvme0", 00:17:11.472 "trtype": "TCP", 00:17:11.472 "adrfam": "IPv4", 00:17:11.472 "traddr": "10.0.0.2", 00:17:11.472 "trsvcid": "4420", 00:17:11.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.472 "prchk_reftag": false, 00:17:11.472 "prchk_guard": false, 00:17:11.472 "ctrlr_loss_timeout_sec": 0, 00:17:11.473 "reconnect_delay_sec": 0, 00:17:11.473 "fast_io_fail_timeout_sec": 0, 00:17:11.473 "psk": "key0", 00:17:11.473 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:11.473 "hdgst": false, 00:17:11.473 "ddgst": false 00:17:11.473 } 00:17:11.473 }, 00:17:11.473 { 00:17:11.473 "method": "bdev_nvme_set_hotplug", 00:17:11.473 "params": { 00:17:11.473 "period_us": 100000, 00:17:11.473 "enable": false 00:17:11.473 } 00:17:11.473 }, 00:17:11.473 { 00:17:11.473 "method": "bdev_enable_histogram", 00:17:11.473 "params": { 00:17:11.473 "name": "nvme0n1", 00:17:11.473 "enable": true 00:17:11.473 } 00:17:11.473 }, 00:17:11.473 { 00:17:11.473 "method": "bdev_wait_for_examine" 00:17:11.473 } 00:17:11.473 ] 00:17:11.473 }, 00:17:11.473 { 00:17:11.473 "subsystem": "nbd", 00:17:11.473 "config": [] 00:17:11.473 } 00:17:11.473 ] 00:17:11.473 }' 00:17:11.473 00:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:11.473 00:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:11.473 [2024-05-15 00:53:58.496880] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:17:11.473 [2024-05-15 00:53:58.497030] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4026448 ] 00:17:11.473 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.731 [2024-05-15 00:53:58.557409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.731 [2024-05-15 00:53:58.673883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.989 [2024-05-15 00:53:58.836255] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:12.555 00:53:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:12.555 00:53:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:12.555 00:53:59 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:12.555 00:53:59 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:17:12.813 00:53:59 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.813 00:53:59 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:13.093 Running I/O for 1 seconds... 00:17:14.027 00:17:14.027 Latency(us) 00:17:14.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.027 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:14.027 Verification LBA range: start 0x0 length 0x2000 00:17:14.027 nvme0n1 : 1.07 1660.36 6.49 0.00 0.00 75011.17 8446.86 108741.21 00:17:14.027 =================================================================================================================== 00:17:14.027 Total : 1660.36 6.49 0.00 0.00 75011.17 8446.86 108741.21 00:17:14.027 0 00:17:14.027 00:54:01 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:17:14.027 00:54:01 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:17:14.027 00:54:01 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:14.027 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:17:14.027 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:17:14.027 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:14.027 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:14.027 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:14.027 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:14.027 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:14.027 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:14.027 nvmf_trace.0 00:17:14.286 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:17:14.286 00:54:01 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 4026448 00:17:14.286 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4026448 ']' 00:17:14.286 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4026448 
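The tar call a few entries up is where process_shm salvages the tracepoint buffer: the target ran with -e 0xFFFF, so every trace group was enabled, and its event ring lives in /dev/shm/nvmf_trace.0, exactly as the startup notices advertised. To inspect the archived copy offline, something like the following should work; the -f mode of spdk_trace (parse a saved file instead of attaching to live shared memory) is an assumption against this SPDK revision:

    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    tar -C /tmp -xzf "$out/nvmf_trace.0_shm.tar.gz"    # unpacks nvmf_trace.0
    # Decode the saved buffer rather than a live app (-s/-i).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace \
        -f /tmp/nvmf_trace.0 | head -50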
00:17:14.286 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:14.286 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:14.286 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4026448 00:17:14.286 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:14.286 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:14.286 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4026448' 00:17:14.286 killing process with pid 4026448 00:17:14.286 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4026448 00:17:14.286 Received shutdown signal, test time was about 1.000000 seconds 00:17:14.286 00:17:14.286 Latency(us) 00:17:14.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.286 =================================================================================================================== 00:17:14.286 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:14.286 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4026448 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:14.545 rmmod nvme_tcp 00:17:14.545 rmmod nvme_fabrics 00:17:14.545 rmmod nvme_keyring 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 4026328 ']' 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 4026328 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4026328 ']' 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4026328 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4026328 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4026328' 00:17:14.545 killing process with pid 4026328 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4026328 00:17:14.545 [2024-05-15 00:54:01.423085] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:14.545 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- 
# wait 4026328 00:17:14.805 00:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:14.805 00:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:14.805 00:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:14.805 00:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:14.805 00:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:14.805 00:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.805 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.805 00:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.715 00:54:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:16.715 00:54:03 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.acxocC8CeT /tmp/tmp.N2ugsRLZ08 /tmp/tmp.BY84opoHNX 00:17:16.715 00:17:16.715 real 1m20.693s 00:17:16.715 user 2m11.396s 00:17:16.715 sys 0m26.517s 00:17:16.715 00:54:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:16.715 00:54:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.715 ************************************ 00:17:16.715 END TEST nvmf_tls 00:17:16.715 ************************************ 00:17:16.715 00:54:03 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:16.715 00:54:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:16.715 00:54:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:16.715 00:54:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:16.715 ************************************ 00:17:16.715 START TEST nvmf_fips 00:17:16.715 ************************************ 00:17:16.715 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:16.976 * Looking for test storage... 
00:17:16.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:17:16.976 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:16.976 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:16.976 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.976 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.976 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.976 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.976 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.976 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.976 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.976 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.977 00:54:03 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:16.977 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:17:16.978 Error setting digest 00:17:16.978 000288197E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:16.978 000288197E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:17:16.978 00:54:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:18.884 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:18.885 
00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:18.885 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:18.885 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:18.885 Found net devices under 0000:08:00.0: cvl_0_0 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:18.885 Found net devices under 0000:08:00.1: cvl_0_1 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:18.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:17:18.885 00:17:18.885 --- 10.0.0.2 ping statistics --- 00:17:18.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.885 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:18.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:18.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:17:18.885 00:17:18.885 --- 10.0.0.1 ping statistics --- 00:17:18.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.885 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.885 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:18.886 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:18.886 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.886 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:18.886 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:18.886 00:54:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:18.886 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:18.886 00:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:18.886 00:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:18.886 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=4028329 00:17:18.886 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:18.886 00:54:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 4028329 00:17:18.886 00:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 4028329 ']' 00:17:18.886 00:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.886 00:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:18.886 00:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.886 00:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:18.886 00:54:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:18.886 [2024-05-15 00:54:05.851665] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:17:18.886 [2024-05-15 00:54:05.851755] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.886 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.886 [2024-05-15 00:54:05.914963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.146 [2024-05-15 00:54:06.030171] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.146 [2024-05-15 00:54:06.030234] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:19.146 [2024-05-15 00:54:06.030258] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.146 [2024-05-15 00:54:06.030271] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.146 [2024-05-15 00:54:06.030283] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:19.146 [2024-05-15 00:54:06.030312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.146 00:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:19.146 00:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:17:19.146 00:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:19.146 00:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:19.146 00:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:19.146 00:54:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.146 00:54:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:19.146 00:54:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:19.146 00:54:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:19.146 00:54:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:19.146 00:54:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:19.146 00:54:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:19.146 00:54:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:19.146 00:54:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:19.405 [2024-05-15 00:54:06.429091] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.405 [2024-05-15 00:54:06.445014] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:19.405 [2024-05-15 00:54:06.445087] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:19.405 [2024-05-15 00:54:06.445276] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.666 [2024-05-15 00:54:06.475654] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:19.666 malloc0 00:17:19.666 00:54:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:19.666 00:54:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=4028447 00:17:19.666 00:54:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:19.666 00:54:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 4028447 /var/tmp/bdevperf.sock 00:17:19.666 00:54:06 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@827 -- # '[' -z 4028447 ']' 00:17:19.666 00:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.666 00:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:19.666 00:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:19.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:19.666 00:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:19.666 00:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:19.666 [2024-05-15 00:54:06.581019] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:17:19.666 [2024-05-15 00:54:06.581122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4028447 ] 00:17:19.666 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.666 [2024-05-15 00:54:06.641760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.925 [2024-05-15 00:54:06.761296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.925 00:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:19.925 00:54:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:17:19.925 00:54:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:20.184 [2024-05-15 00:54:07.137625] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:20.184 [2024-05-15 00:54:07.137772] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:20.184 TLSTESTn1 00:17:20.184 00:54:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:20.443 Running I/O for 10 seconds... 
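While the ten-second verify run above is in flight, it is worth decoding the key material that made TLSTESTn1 possible. The string echoed into key.txt near the top of this test, NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:, is a configured PSK in the NVMe/TCP interchange format: version 1, hash identifier 01 (SHA-256), then 48 base64 characters carrying the 32-byte secret plus a CRC-32 guard. A fresh key in the same format can be minted with recent nvme-cli (the gen-tls-key plugin and its --hmac flag are an assumption about the installed version), and the chmod mirrors what fips.sh did, since SPDK expects key files locked down to the owner:

    # --hmac=1 selects SHA-256, i.e. the "01" field of the interchange string.
    key=$(nvme gen-tls-key --hmac=1)
    printf '%s' "$key" > /tmp/psk.txt
    chmod 0600 /tmp/psk.txt    # keep the PSK unreadable to group/others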
00:17:30.413 00:17:30.413 Latency(us) 00:17:30.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.413 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:30.413 Verification LBA range: start 0x0 length 0x2000 00:17:30.413 TLSTESTn1 : 10.05 2469.56 9.65 0.00 0.00 51681.71 7427.41 76118.85 00:17:30.413 =================================================================================================================== 00:17:30.413 Total : 2469.56 9.65 0.00 0.00 51681.71 7427.41 76118.85 00:17:30.413 0 00:17:30.413 00:54:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:30.413 00:54:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:30.413 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:17:30.413 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:17:30.413 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:30.413 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:30.413 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:30.413 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:30.413 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:30.413 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:30.413 nvmf_trace.0 00:17:30.671 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:17:30.671 00:54:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 4028447 00:17:30.671 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 4028447 ']' 00:17:30.671 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 4028447 00:17:30.671 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:17:30.671 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:30.671 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4028447 00:17:30.671 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:30.671 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:30.671 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4028447' 00:17:30.671 killing process with pid 4028447 00:17:30.671 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 4028447 00:17:30.671 Received shutdown signal, test time was about 10.000000 seconds 00:17:30.671 00:17:30.671 Latency(us) 00:17:30.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.671 =================================================================================================================== 00:17:30.671 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:30.671 [2024-05-15 00:54:17.538064] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:30.671 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 4028447 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.930 rmmod nvme_tcp 00:17:30.930 rmmod nvme_fabrics 00:17:30.930 rmmod nvme_keyring 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 4028329 ']' 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 4028329 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 4028329 ']' 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 4028329 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4028329 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4028329' 00:17:30.930 killing process with pid 4028329 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 4028329 00:17:30.930 [2024-05-15 00:54:17.820425] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:30.930 [2024-05-15 00:54:17.820466] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:30.930 00:54:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 4028329 00:17:31.190 00:54:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:31.190 00:54:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:31.190 00:54:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:31.190 00:54:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:31.190 00:54:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:31.190 00:54:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.190 00:54:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.190 00:54:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.096 00:54:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:33.096 00:54:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:33.096 00:17:33.096 real 0m16.340s 00:17:33.096 user 0m20.967s 00:17:33.096 sys 0m5.899s 00:17:33.096 00:54:20 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:33.096 00:54:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:33.096 ************************************ 00:17:33.096 END TEST nvmf_fips 00:17:33.096 ************************************ 00:17:33.096 00:54:20 nvmf_tcp -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:17:33.096 00:54:20 nvmf_tcp -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:17:33.096 00:54:20 nvmf_tcp -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:17:33.096 00:54:20 nvmf_tcp -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:17:33.096 00:54:20 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:17:33.096 00:54:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:34.998 00:54:21 
nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:34.998 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:34.998 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.998 00:54:21 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:34.999 Found net devices under 0000:08:00.0: cvl_0_0 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:34.999 Found net devices under 0000:08:00.1: cvl_0_1 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:17:34.999 00:54:21 nvmf_tcp -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
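gather_supported_nvmf_pci_devs above works in two passes: it first builds whitelists of NIC PCI IDs (Intel E810 devices 0x1592/0x159b under the ice driver, X722 0x37d2, and a set of Mellanox ConnectX IDs), then resolves every matching PCI function to its kernel net device through sysfs, which is how the two E810 ports at 0000:08:00.0/1 map to cvl_0_0 and cvl_0_1. A rough equivalent of the resolution step, assuming the standard sysfs layout rather than the script's cached PCI-bus arrays:

  # Sketch: resolve supported NICs to net devices the way the trace does.
  # The 0x8086:0x159b filter matches the two E810 ports enumerated above.
  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done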
00:17:34.999 00:54:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:34.999 00:54:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:34.999 00:54:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:34.999 ************************************ 00:17:34.999 START TEST nvmf_perf_adq 00:17:34.999 ************************************ 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:17:34.999 * Looking for test storage... 00:17:34.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:17:34.999 00:54:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:36.907 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.907 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:36.907 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:36.908 Found net devices under 0000:08:00.0: cvl_0_0 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:36.908 Found net devices under 0000:08:00.1: cvl_0_1 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:17:36.908 00:54:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:17:37.166 00:54:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:17:38.628 00:54:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:43.900 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:43.900 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:43.900 Found net devices under 0000:08:00.0: cvl_0_0 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:43.900 Found net devices under 0000:08:00.1: cvl_0_1 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.900 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:43.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:17:43.901 00:17:43.901 --- 10.0.0.2 ping statistics --- 00:17:43.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.901 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:43.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:17:43.901 00:17:43.901 --- 10.0.0.1 ping statistics --- 00:17:43.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.901 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4033357 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4033357 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 4033357 ']' 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
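nvmftestinit has now built the point-to-point topology this test runs on: the first E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace where nvmf_tgt will run, the second port (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator, port 4420 is opened in iptables, and the two pings prove reachability in both directions before the target is started under ip netns exec. Condensed from the trace, with the same device and namespace names:

  # Sketch: the namespace topology nvmftestinit assembles above.
  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # first port moves in
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator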
00:17:43.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:43.901 [2024-05-15 00:54:30.707910] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:17:43.901 [2024-05-15 00:54:30.708019] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.901 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.901 [2024-05-15 00:54:30.777716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:43.901 [2024-05-15 00:54:30.899796] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.901 [2024-05-15 00:54:30.899859] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.901 [2024-05-15 00:54:30.899875] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.901 [2024-05-15 00:54:30.899888] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.901 [2024-05-15 00:54:30.899900] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.901 [2024-05-15 00:54:30.899968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.901 [2024-05-15 00:54:30.900055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.901 [2024-05-15 00:54:30.900139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:43.901 [2024-05-15 00:54:30.900171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.901 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:44.160 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:17:44.160 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:44.160 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:44.160 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:44.160 00:54:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.160 00:54:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:17:44.160 00:54:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:17:44.160 00:54:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:17:44.160 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.160 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:44.160 00:54:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:44.160 [2024-05-15 00:54:31.137540] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:44.160 Malloc1 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:44.160 [2024-05-15 00:54:31.187401] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:44.160 [2024-05-15 00:54:31.187704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=4033388 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:17:44.160 00:54:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:44.419 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.321 00:54:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:17:46.321 00:54:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.321 00:54:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:46.321 00:54:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.321 00:54:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:17:46.321 "tick_rate": 2700000000, 00:17:46.321 "poll_groups": [ 00:17:46.321 { 00:17:46.321 "name": "nvmf_tgt_poll_group_000", 00:17:46.321 "admin_qpairs": 1, 00:17:46.321 "io_qpairs": 1, 00:17:46.321 "current_admin_qpairs": 1, 00:17:46.321 "current_io_qpairs": 1, 00:17:46.321 "pending_bdev_io": 0, 00:17:46.321 "completed_nvme_io": 14975, 00:17:46.321 "transports": [ 00:17:46.321 { 00:17:46.321 "trtype": "TCP" 00:17:46.321 } 00:17:46.321 ] 00:17:46.321 }, 00:17:46.321 { 00:17:46.321 "name": "nvmf_tgt_poll_group_001", 00:17:46.321 "admin_qpairs": 0, 00:17:46.321 "io_qpairs": 1, 00:17:46.321 "current_admin_qpairs": 0, 00:17:46.321 "current_io_qpairs": 1, 00:17:46.321 "pending_bdev_io": 0, 00:17:46.321 "completed_nvme_io": 19154, 00:17:46.321 "transports": [ 00:17:46.321 { 00:17:46.321 "trtype": "TCP" 00:17:46.321 } 00:17:46.321 ] 00:17:46.321 }, 00:17:46.321 { 00:17:46.321 "name": "nvmf_tgt_poll_group_002", 00:17:46.321 "admin_qpairs": 0, 00:17:46.321 "io_qpairs": 1, 00:17:46.321 "current_admin_qpairs": 0, 00:17:46.321 "current_io_qpairs": 1, 00:17:46.321 "pending_bdev_io": 0, 00:17:46.321 "completed_nvme_io": 16577, 00:17:46.321 "transports": [ 00:17:46.321 { 00:17:46.321 "trtype": "TCP" 00:17:46.321 } 00:17:46.321 ] 00:17:46.321 }, 00:17:46.321 { 00:17:46.321 "name": "nvmf_tgt_poll_group_003", 00:17:46.321 "admin_qpairs": 0, 00:17:46.321 "io_qpairs": 1, 00:17:46.321 "current_admin_qpairs": 0, 00:17:46.321 "current_io_qpairs": 1, 00:17:46.321 "pending_bdev_io": 0, 00:17:46.321 "completed_nvme_io": 18770, 00:17:46.321 "transports": [ 00:17:46.321 { 00:17:46.321 "trtype": "TCP" 00:17:46.321 } 00:17:46.321 ] 00:17:46.321 } 00:17:46.321 ] 00:17:46.321 }' 00:17:46.321 00:54:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:17:46.321 00:54:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:17:46.321 00:54:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:17:46.321 00:54:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:17:46.321 00:54:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 4033388 00:17:54.434 Initializing NVMe Controllers 00:17:54.434 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:54.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:17:54.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:17:54.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:17:54.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:17:54.434 Initialization complete. Launching workers. 
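The spdk_nvme_perf run above pins its workers to cores 4-7 (-c 0xF0) and drives 4 KiB random reads (-o 4096 -w randread) at queue depth 64 against cnode1; while it runs, the test polls nvmf_get_stats and requires that each of the four target poll groups owns exactly one active io_qpair, which is the distribution ADQ steering should produce (the jq pipeline counted 4, so the [[ 4 -ne 4 ]] guard does not trip). A sketch of that check, with rpc.py standing in for the workspace's rpc_cmd wrapper:

  # Sketch: assert that ADQ spread the connections one io_qpair per poll group.
  # rpc.py here is a placeholder for the rpc_cmd wrapper used in the trace.
  count=$(rpc.py nvmf_get_stats \
      | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
      | wc -l)
  [[ $count -ne 4 ]] && echo "ADQ steering failed: $count of 4 poll groups active"

In the latency table that follows, the MiB/s column is simply IOPS times the 4 KiB I/O size (for core 4: 9889.39 x 4096 / 2^20 = 38.63 MiB/s).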
00:17:54.434 ========================================================
00:17:54.434 Latency(us)
00:17:54.434 Device Information : IOPS MiB/s Average min max
00:17:54.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9889.39 38.63 6471.08 4174.69 7950.08
00:17:54.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10112.28 39.50 6330.12 3387.30 7926.83
00:17:54.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8798.52 34.37 7274.66 3392.06 11686.81
00:17:54.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7910.65 30.90 8093.94 2497.29 13262.96
00:17:54.434 ========================================================
00:17:54.434 Total : 36710.84 143.40 6974.55 2497.29 13262.96
00:17:54.434
00:17:54.434 00:54:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini
00:54:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:54:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:54:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:54:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:54:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:54:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:54:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:54:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:54:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:54:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4033357 ']'
00:54:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 4033357
00:54:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 4033357 ']'
00:54:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 4033357
00:54:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname
00:54:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:54:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4033357
00:54:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:54:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:54:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4033357'
killing process with pid 4033357
00:54:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 4033357
[2024-05-15 00:54:41.413525] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:54:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 4033357
00:17:54.694 00:54:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:54:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:54:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:54:41 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:54.694 00:54:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:54.694 00:54:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.694 00:54:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.694 00:54:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.229 00:54:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:57.229 00:54:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:17:57.229 00:54:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:17:57.229 00:54:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:17:58.600 00:54:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:03.878 
00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:03.878 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:03.878 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:03.878 Found net devices under 0000:08:00.0: cvl_0_0 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:03.878 Found net devices under 0000:08:00.1: cvl_0_1 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:03.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:18:03.878 00:18:03.878 --- 10.0.0.2 ping statistics --- 00:18:03.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.878 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:03.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:03.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:18:03.878 00:18:03.878 --- 10.0.0.1 ping statistics --- 00:18:03.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.878 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:18:03.878 net.core.busy_poll = 1 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:18:03.878 net.core.busy_read = 1 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4035390 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4035390 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 4035390 ']' 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:03.878 00:54:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.879 00:54:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:03.879 00:54:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:04.139 [2024-05-15 00:54:50.945517] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:18:04.139 [2024-05-15 00:54:50.945607] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.139 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.139 [2024-05-15 00:54:51.010065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:04.139 [2024-05-15 00:54:51.127134] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.139 [2024-05-15 00:54:51.127193] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.139 [2024-05-15 00:54:51.127209] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.139 [2024-05-15 00:54:51.127223] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:04.139 [2024-05-15 00:54:51.127234] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
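For reference, the adq_configure_driver sequence that just ran, collected in one place. This is the same command set the log shows, minus the ip netns exec cvl_0_0_ns_spdk prefix; the target-side port cvl_0_0 sits in that namespace with 10.0.0.2, while the initiator keeps cvl_0_1 at 10.0.0.1 in the root namespace:

    ethtool --offload cvl_0_0 hw-tc-offload on                        # enable hardware traffic-class offload
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1                                    # busy-poll sockets instead of sleeping
    sysctl -w net.core.busy_read=1
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 \
        queues 2@0 2@2 hw 1 mode channel                              # TC0 = queues 0-1, TC1 = queues 2-3
    tc qdisc add dev cvl_0_0 ingress
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 # steer NVMe/TCP (port 4420) into TC1 in hardware

The set_xps_rxqs helper then aligns transmit-queue selection with the corresponding receive queues, and nvmf_tgt is started inside the namespace with -m 0xF, i.e. one reactor on each of cores 0-3, which the reactor lines below confirm.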
00:18:04.139 [2024-05-15 00:54:51.127307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.139 [2024-05-15 00:54:51.127559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.139 [2024-05-15 00:54:51.127611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:04.139 [2024-05-15 00:54:51.127615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.139 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:04.139 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:18:04.139 00:54:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:04.139 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:04.139 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:04.397 [2024-05-15 00:54:51.372542] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:04.397 Malloc1 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.397 00:54:51 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:04.397 [2024-05-15 00:54:51.422378] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:04.397 [2024-05-15 00:54:51.422644] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=4035438 00:18:04.397 00:54:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:18:04.398 00:54:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:04.655 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.554 00:54:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:18:06.554 00:54:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.554 00:54:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.554 00:54:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.554 00:54:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:18:06.554 "tick_rate": 2700000000, 00:18:06.554 "poll_groups": [ 00:18:06.554 { 00:18:06.554 "name": "nvmf_tgt_poll_group_000", 00:18:06.554 "admin_qpairs": 1, 00:18:06.554 "io_qpairs": 1, 00:18:06.554 "current_admin_qpairs": 1, 00:18:06.554 "current_io_qpairs": 1, 00:18:06.554 "pending_bdev_io": 0, 00:18:06.554 "completed_nvme_io": 19826, 00:18:06.554 "transports": [ 00:18:06.554 { 00:18:06.554 "trtype": "TCP" 00:18:06.554 } 00:18:06.554 ] 00:18:06.554 }, 00:18:06.554 { 00:18:06.554 "name": "nvmf_tgt_poll_group_001", 00:18:06.554 "admin_qpairs": 0, 00:18:06.554 "io_qpairs": 3, 00:18:06.554 "current_admin_qpairs": 0, 00:18:06.554 "current_io_qpairs": 3, 00:18:06.554 "pending_bdev_io": 0, 00:18:06.554 "completed_nvme_io": 23736, 00:18:06.554 "transports": [ 00:18:06.554 { 00:18:06.554 "trtype": "TCP" 00:18:06.554 } 00:18:06.554 ] 00:18:06.554 }, 00:18:06.554 { 00:18:06.554 "name": 
"nvmf_tgt_poll_group_002", 00:18:06.554 "admin_qpairs": 0, 00:18:06.554 "io_qpairs": 0, 00:18:06.554 "current_admin_qpairs": 0, 00:18:06.554 "current_io_qpairs": 0, 00:18:06.554 "pending_bdev_io": 0, 00:18:06.554 "completed_nvme_io": 0, 00:18:06.554 "transports": [ 00:18:06.554 { 00:18:06.554 "trtype": "TCP" 00:18:06.554 } 00:18:06.554 ] 00:18:06.554 }, 00:18:06.554 { 00:18:06.554 "name": "nvmf_tgt_poll_group_003", 00:18:06.554 "admin_qpairs": 0, 00:18:06.554 "io_qpairs": 0, 00:18:06.554 "current_admin_qpairs": 0, 00:18:06.554 "current_io_qpairs": 0, 00:18:06.554 "pending_bdev_io": 0, 00:18:06.554 "completed_nvme_io": 0, 00:18:06.554 "transports": [ 00:18:06.554 { 00:18:06.554 "trtype": "TCP" 00:18:06.554 } 00:18:06.554 ] 00:18:06.554 } 00:18:06.554 ] 00:18:06.554 }' 00:18:06.554 00:54:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:18:06.554 00:54:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:18:06.554 00:54:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:18:06.554 00:54:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:18:06.554 00:54:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 4035438 00:18:14.661 Initializing NVMe Controllers 00:18:14.661 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:14.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:18:14.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:18:14.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:18:14.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:18:14.661 Initialization complete. Launching workers. 
00:18:14.661 ========================================================
00:18:14.661 Latency(us)
00:18:14.661 Device Information : IOPS MiB/s Average min max
00:18:14.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4270.20 16.68 14997.06 2868.16 64135.21
00:18:14.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10728.20 41.91 5965.98 1836.03 47484.88
00:18:14.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4077.40 15.93 15704.50 2093.02 62952.32
00:18:14.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4431.40 17.31 14478.38 2160.43 63430.01
00:18:14.661 ========================================================
00:18:14.661 Total : 23507.20 91.82 10900.39 1836.03 64135.21
00:18:14.661
00:18:14.661 00:55:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:18:14.661 00:55:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:14.661 00:55:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:18:14.661 00:55:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:14.661 00:55:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:18:14.661 00:55:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:14.661 00:55:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:14.661 rmmod nvme_tcp 00:18:14.661 rmmod nvme_fabrics 00:18:14.661 rmmod nvme_keyring 00:18:14.661 00:55:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:14.919 00:55:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:18:14.919 00:55:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:18:14.919 00:55:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4035390 ']' 00:18:14.919 00:55:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 4035390 00:18:14.919 00:55:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 4035390 ']' 00:18:14.919 00:55:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 4035390 00:18:14.919 00:55:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:18:14.919 00:55:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:14.919 00:55:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4035390 00:18:14.919 00:55:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:14.919 00:55:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:14.919 00:55:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4035390' 00:18:14.919 killing process with pid 4035390 00:18:14.919 00:55:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 4035390 00:18:14.919 [2024-05-15 00:55:01.751329] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:14.919 00:55:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 4035390 00:18:15.177 00:55:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:15.177 00:55:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:15.177 00:55:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:15.177
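Two consistency checks on the run above. The gating check: nvmf_get_stats showed all four I/O qpairs landing on poll groups 000 and 001 (1 and 3 respectively), the jq/wc -l pipeline counted the 2 idle groups (002 and 003), and since [[ 2 -lt 2 ]] is false the test proceeded; ADQ steering confined the NVMe/TCP flows to the expected subset of queues. The arithmetic also holds together: per-core IOPS sum to 4270.20 + 10728.20 + 4077.40 + 4431.40 = 23507.20, and with 4 KiB reads MiB/s = IOPS/256 (4096/2^20 = 1/256), so 23507.20/256 ≈ 91.82, matching the Total row. Little's law ties latency in as well: at queue depth 64 per connection, average latency ≈ 64/IOPS, e.g. 64/10728.20 s ≈ 5966 us for core 5, essentially the 5965.98 us reported.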
00:55:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.177 00:55:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:15.177 00:55:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.177 00:55:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.177 00:55:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.465 00:55:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:18.465 00:55:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:18:18.465 00:18:18.465 real 0m43.236s 00:18:18.465 user 2m33.109s 00:18:18.465 sys 0m11.970s 00:18:18.465 00:55:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:18.465 00:55:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:18.465 ************************************ 00:18:18.465 END TEST nvmf_perf_adq 00:18:18.465 ************************************ 00:18:18.465 00:55:05 nvmf_tcp -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:18:18.465 00:55:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:18.465 00:55:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:18.465 00:55:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:18.465 ************************************ 00:18:18.465 START TEST nvmf_shutdown 00:18:18.465 ************************************ 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:18:18.465 * Looking for test storage... 
00:18:18.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:18.465 ************************************ 00:18:18.465 START TEST nvmf_shutdown_tc1 00:18:18.465 ************************************ 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:18:18.465 00:55:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.465 00:55:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:18.466 00:55:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:18.466 00:55:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:18.466 00:55:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:19.866 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:19.866 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:19.866 00:55:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:19.866 Found net devices under 0000:08:00.0: cvl_0_0 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:19.866 Found net devices under 0000:08:00.1: cvl_0_1 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:19.866 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:19.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:18:19.867 00:18:19.867 --- 10.0.0.2 ping statistics --- 00:18:19.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.867 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:19.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:19.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:18:19.867 00:18:19.867 --- 10.0.0.1 ping statistics --- 00:18:19.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.867 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:19.867 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:20.124 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:18:20.124 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:20.124 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:20.124 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:20.124 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=4038006 00:18:20.124 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 4038006 00:18:20.124 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:20.124 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 4038006 ']' 00:18:20.124 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.124 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:20.124 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.124 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:20.124 00:55:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:20.124 [2024-05-15 00:55:06.981809] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
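A note on the masks in the nvmf_tgt invocation above: -m 0x1E is the reactor core mask and -e 0xFFFF the tracepoint-group mask. 0x1E = 0b11110 selects bits 1-4, so reactors come up on cores 1-4 (the "Reactor started on core N" lines that follow confirm this), leaving core 0 free for the bdev_svc side started later with -m 0x1. An illustrative one-liner for decoding such masks (not from the test scripts):

    mask=0x1E; for i in {0..7}; do (( (mask >> i) & 1 )) && echo "core $i"; done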
00:18:20.124 [2024-05-15 00:55:06.981909] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.124 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.124 [2024-05-15 00:55:07.048147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:20.124 [2024-05-15 00:55:07.167777] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.124 [2024-05-15 00:55:07.167838] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.124 [2024-05-15 00:55:07.167853] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.124 [2024-05-15 00:55:07.167867] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.124 [2024-05-15 00:55:07.167878] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.124 [2024-05-15 00:55:07.167970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.124 [2024-05-15 00:55:07.167996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:20.124 [2024-05-15 00:55:07.168044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:20.124 [2024-05-15 00:55:07.168048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:20.382 [2024-05-15 00:55:07.314604] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.382 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:20.382 Malloc1 00:18:20.382 [2024-05-15 00:55:07.404827] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:20.382 [2024-05-15 00:55:07.405128] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.382 Malloc2 00:18:20.638 Malloc3 00:18:20.638 Malloc4 00:18:20.638 Malloc5 00:18:20.638 Malloc6 00:18:20.638 Malloc7 00:18:20.896 Malloc8 00:18:20.896 Malloc9 00:18:20.896 Malloc10 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:20.896 00:55:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=4038093 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 4038093 /var/tmp/bdevperf.sock 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 4038093 ']' 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:20.896 { 00:18:20.896 "params": { 00:18:20.896 "name": "Nvme$subsystem", 00:18:20.896 "trtype": "$TEST_TRANSPORT", 00:18:20.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.896 "adrfam": "ipv4", 00:18:20.896 "trsvcid": "$NVMF_PORT", 00:18:20.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.896 "hdgst": ${hdgst:-false}, 00:18:20.896 "ddgst": ${ddgst:-false} 00:18:20.896 }, 00:18:20.896 "method": "bdev_nvme_attach_controller" 00:18:20.896 } 00:18:20.896 EOF 00:18:20.896 )") 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:20.896 { 00:18:20.896 "params": { 00:18:20.896 "name": "Nvme$subsystem", 00:18:20.896 "trtype": "$TEST_TRANSPORT", 00:18:20.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.896 "adrfam": "ipv4", 00:18:20.896 "trsvcid": "$NVMF_PORT", 00:18:20.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.896 "hdgst": ${hdgst:-false}, 00:18:20.896 "ddgst": ${ddgst:-false} 00:18:20.896 }, 00:18:20.896 "method": "bdev_nvme_attach_controller" 00:18:20.896 } 00:18:20.896 EOF 00:18:20.896 )") 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:20.896 { 00:18:20.896 "params": { 00:18:20.896 "name": "Nvme$subsystem", 00:18:20.896 "trtype": "$TEST_TRANSPORT", 00:18:20.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.896 "adrfam": "ipv4", 00:18:20.896 "trsvcid": "$NVMF_PORT", 00:18:20.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.896 "hdgst": ${hdgst:-false}, 00:18:20.896 "ddgst": ${ddgst:-false} 00:18:20.896 }, 00:18:20.896 "method": "bdev_nvme_attach_controller" 00:18:20.896 } 00:18:20.896 EOF 00:18:20.896 )") 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:20.896 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:20.896 { 00:18:20.896 "params": { 00:18:20.896 "name": "Nvme$subsystem", 00:18:20.896 "trtype": "$TEST_TRANSPORT", 00:18:20.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.896 "adrfam": "ipv4", 00:18:20.896 "trsvcid": "$NVMF_PORT", 00:18:20.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.897 "hdgst": ${hdgst:-false}, 00:18:20.897 "ddgst": ${ddgst:-false} 00:18:20.897 }, 00:18:20.897 "method": "bdev_nvme_attach_controller" 00:18:20.897 } 00:18:20.897 EOF 00:18:20.897 )") 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:20.897 { 00:18:20.897 "params": { 00:18:20.897 "name": "Nvme$subsystem", 00:18:20.897 "trtype": "$TEST_TRANSPORT", 00:18:20.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.897 "adrfam": "ipv4", 00:18:20.897 "trsvcid": "$NVMF_PORT", 00:18:20.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.897 "hdgst": ${hdgst:-false}, 00:18:20.897 "ddgst": ${ddgst:-false} 00:18:20.897 }, 00:18:20.897 "method": "bdev_nvme_attach_controller" 00:18:20.897 } 00:18:20.897 EOF 00:18:20.897 )") 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:20.897 { 00:18:20.897 "params": { 00:18:20.897 "name": "Nvme$subsystem", 00:18:20.897 "trtype": "$TEST_TRANSPORT", 00:18:20.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.897 "adrfam": "ipv4", 00:18:20.897 "trsvcid": "$NVMF_PORT", 00:18:20.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.897 "hdgst": ${hdgst:-false}, 00:18:20.897 "ddgst": ${ddgst:-false} 00:18:20.897 }, 00:18:20.897 "method": "bdev_nvme_attach_controller" 00:18:20.897 } 00:18:20.897 EOF 00:18:20.897 )") 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:20.897 { 00:18:20.897 "params": { 00:18:20.897 "name": "Nvme$subsystem", 00:18:20.897 "trtype": "$TEST_TRANSPORT", 00:18:20.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.897 "adrfam": "ipv4", 00:18:20.897 "trsvcid": "$NVMF_PORT", 00:18:20.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.897 "hdgst": ${hdgst:-false}, 00:18:20.897 "ddgst": ${ddgst:-false} 00:18:20.897 }, 00:18:20.897 "method": "bdev_nvme_attach_controller" 00:18:20.897 } 00:18:20.897 EOF 00:18:20.897 )") 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:20.897 { 00:18:20.897 "params": { 00:18:20.897 "name": "Nvme$subsystem", 00:18:20.897 "trtype": "$TEST_TRANSPORT", 00:18:20.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.897 "adrfam": "ipv4", 00:18:20.897 "trsvcid": "$NVMF_PORT", 00:18:20.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.897 "hdgst": ${hdgst:-false}, 00:18:20.897 "ddgst": ${ddgst:-false} 00:18:20.897 }, 00:18:20.897 "method": "bdev_nvme_attach_controller" 00:18:20.897 } 00:18:20.897 EOF 00:18:20.897 )") 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:20.897 { 00:18:20.897 "params": { 00:18:20.897 "name": "Nvme$subsystem", 00:18:20.897 "trtype": "$TEST_TRANSPORT", 00:18:20.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.897 "adrfam": "ipv4", 00:18:20.897 "trsvcid": "$NVMF_PORT", 00:18:20.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.897 "hdgst": ${hdgst:-false}, 00:18:20.897 "ddgst": ${ddgst:-false} 00:18:20.897 }, 00:18:20.897 "method": "bdev_nvme_attach_controller" 00:18:20.897 } 00:18:20.897 EOF 00:18:20.897 )") 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:20.897 { 00:18:20.897 "params": { 00:18:20.897 "name": "Nvme$subsystem", 00:18:20.897 "trtype": "$TEST_TRANSPORT", 00:18:20.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.897 "adrfam": "ipv4", 00:18:20.897 "trsvcid": "$NVMF_PORT", 00:18:20.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.897 "hdgst": ${hdgst:-false}, 00:18:20.897 "ddgst": ${ddgst:-false} 00:18:20.897 }, 00:18:20.897 "method": "bdev_nvme_attach_controller" 00:18:20.897 } 00:18:20.897 EOF 00:18:20.897 )") 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:18:20.897 00:55:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:20.897 "params": { 00:18:20.897 "name": "Nvme1", 00:18:20.897 "trtype": "tcp", 00:18:20.897 "traddr": "10.0.0.2", 00:18:20.897 "adrfam": "ipv4", 00:18:20.897 "trsvcid": "4420", 00:18:20.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.897 "hdgst": false, 00:18:20.897 "ddgst": false 00:18:20.897 }, 00:18:20.897 "method": "bdev_nvme_attach_controller" 00:18:20.897 },{ 00:18:20.897 "params": { 00:18:20.897 "name": "Nvme2", 00:18:20.897 "trtype": "tcp", 00:18:20.897 "traddr": "10.0.0.2", 00:18:20.897 "adrfam": "ipv4", 00:18:20.897 "trsvcid": "4420", 00:18:20.897 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:20.897 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:20.897 "hdgst": false, 00:18:20.897 "ddgst": false 00:18:20.897 }, 00:18:20.897 "method": "bdev_nvme_attach_controller" 00:18:20.897 },{ 00:18:20.897 "params": { 00:18:20.897 "name": "Nvme3", 00:18:20.897 "trtype": "tcp", 00:18:20.897 "traddr": "10.0.0.2", 00:18:20.897 "adrfam": "ipv4", 00:18:20.897 "trsvcid": "4420", 00:18:20.897 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:20.897 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:20.897 "hdgst": false, 00:18:20.897 "ddgst": false 00:18:20.897 }, 00:18:20.897 "method": "bdev_nvme_attach_controller" 00:18:20.897 },{ 00:18:20.897 "params": { 00:18:20.897 "name": "Nvme4", 00:18:20.897 "trtype": "tcp", 00:18:20.897 "traddr": "10.0.0.2", 00:18:20.897 "adrfam": "ipv4", 00:18:20.897 "trsvcid": "4420", 00:18:20.897 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:20.897 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:20.897 "hdgst": false, 00:18:20.897 "ddgst": false 00:18:20.897 }, 00:18:20.897 "method": "bdev_nvme_attach_controller" 00:18:20.897 },{ 00:18:20.897 "params": { 00:18:20.897 "name": "Nvme5", 00:18:20.897 "trtype": "tcp", 00:18:20.897 "traddr": "10.0.0.2", 00:18:20.897 "adrfam": "ipv4", 00:18:20.897 "trsvcid": "4420", 00:18:20.897 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:20.897 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:20.897 "hdgst": false, 00:18:20.897 "ddgst": false 00:18:20.897 }, 00:18:20.897 "method": "bdev_nvme_attach_controller" 00:18:20.897 },{ 00:18:20.897 "params": { 00:18:20.897 "name": "Nvme6", 00:18:20.897 "trtype": "tcp", 00:18:20.897 "traddr": "10.0.0.2", 00:18:20.897 "adrfam": "ipv4", 00:18:20.897 "trsvcid": "4420", 00:18:20.897 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:20.897 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:20.897 "hdgst": false, 00:18:20.897 "ddgst": false 00:18:20.897 }, 00:18:20.897 "method": "bdev_nvme_attach_controller" 00:18:20.897 },{ 00:18:20.897 "params": { 00:18:20.897 "name": "Nvme7", 00:18:20.897 "trtype": "tcp", 00:18:20.897 "traddr": "10.0.0.2", 00:18:20.897 "adrfam": "ipv4", 00:18:20.897 "trsvcid": "4420", 00:18:20.897 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:20.897 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:20.897 "hdgst": false, 00:18:20.897 "ddgst": false 00:18:20.897 }, 00:18:20.897 "method": "bdev_nvme_attach_controller" 00:18:20.897 },{ 00:18:20.897 "params": { 00:18:20.897 "name": "Nvme8", 00:18:20.897 "trtype": "tcp", 00:18:20.897 "traddr": "10.0.0.2", 00:18:20.897 "adrfam": "ipv4", 00:18:20.897 "trsvcid": "4420", 00:18:20.897 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:20.897 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:20.897 "hdgst": false, 
00:18:20.897 "ddgst": false 00:18:20.897 }, 00:18:20.897 "method": "bdev_nvme_attach_controller" 00:18:20.897 },{ 00:18:20.897 "params": { 00:18:20.897 "name": "Nvme9", 00:18:20.897 "trtype": "tcp", 00:18:20.897 "traddr": "10.0.0.2", 00:18:20.898 "adrfam": "ipv4", 00:18:20.898 "trsvcid": "4420", 00:18:20.898 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:20.898 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:20.898 "hdgst": false, 00:18:20.898 "ddgst": false 00:18:20.898 }, 00:18:20.898 "method": "bdev_nvme_attach_controller" 00:18:20.898 },{ 00:18:20.898 "params": { 00:18:20.898 "name": "Nvme10", 00:18:20.898 "trtype": "tcp", 00:18:20.898 "traddr": "10.0.0.2", 00:18:20.898 "adrfam": "ipv4", 00:18:20.898 "trsvcid": "4420", 00:18:20.898 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:20.898 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:20.898 "hdgst": false, 00:18:20.898 "ddgst": false 00:18:20.898 }, 00:18:20.898 "method": "bdev_nvme_attach_controller" 00:18:20.898 }' 00:18:20.898 [2024-05-15 00:55:07.902500] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:18:20.898 [2024-05-15 00:55:07.902585] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:20.898 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.156 [2024-05-15 00:55:07.964350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.156 [2024-05-15 00:55:08.081220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.056 00:55:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:23.056 00:55:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:18:23.056 00:55:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:23.056 00:55:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.056 00:55:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:23.056 00:55:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.056 00:55:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 4038093 00:18:23.056 00:55:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:18:23.056 00:55:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:18:23.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 4038093 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:18:23.989 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 4038006 00:18:23.989 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:23.989 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:23.989 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:18:23.989 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 
-- # local subsystem config 00:18:23.989 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.989 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.989 { 00:18:23.989 "params": { 00:18:23.989 "name": "Nvme$subsystem", 00:18:23.989 "trtype": "$TEST_TRANSPORT", 00:18:23.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.989 "adrfam": "ipv4", 00:18:23.989 "trsvcid": "$NVMF_PORT", 00:18:23.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.989 "hdgst": ${hdgst:-false}, 00:18:23.989 "ddgst": ${ddgst:-false} 00:18:23.989 }, 00:18:23.989 "method": "bdev_nvme_attach_controller" 00:18:23.989 } 00:18:23.989 EOF 00:18:23.989 )") 00:18:23.989 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.989 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.989 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.989 { 00:18:23.989 "params": { 00:18:23.989 "name": "Nvme$subsystem", 00:18:23.989 "trtype": "$TEST_TRANSPORT", 00:18:23.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.989 "adrfam": "ipv4", 00:18:23.989 "trsvcid": "$NVMF_PORT", 00:18:23.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.989 "hdgst": ${hdgst:-false}, 00:18:23.989 "ddgst": ${ddgst:-false} 00:18:23.989 }, 00:18:23.989 "method": "bdev_nvme_attach_controller" 00:18:23.989 } 00:18:23.990 EOF 00:18:23.990 )") 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.990 { 00:18:23.990 "params": { 00:18:23.990 "name": "Nvme$subsystem", 00:18:23.990 "trtype": "$TEST_TRANSPORT", 00:18:23.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.990 "adrfam": "ipv4", 00:18:23.990 "trsvcid": "$NVMF_PORT", 00:18:23.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.990 "hdgst": ${hdgst:-false}, 00:18:23.990 "ddgst": ${ddgst:-false} 00:18:23.990 }, 00:18:23.990 "method": "bdev_nvme_attach_controller" 00:18:23.990 } 00:18:23.990 EOF 00:18:23.990 )") 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.990 { 00:18:23.990 "params": { 00:18:23.990 "name": "Nvme$subsystem", 00:18:23.990 "trtype": "$TEST_TRANSPORT", 00:18:23.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.990 "adrfam": "ipv4", 00:18:23.990 "trsvcid": "$NVMF_PORT", 00:18:23.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.990 "hdgst": ${hdgst:-false}, 00:18:23.990 "ddgst": ${ddgst:-false} 00:18:23.990 }, 00:18:23.990 "method": "bdev_nvme_attach_controller" 00:18:23.990 } 00:18:23.990 EOF 00:18:23.990 )") 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.990 { 00:18:23.990 "params": { 00:18:23.990 "name": "Nvme$subsystem", 00:18:23.990 "trtype": "$TEST_TRANSPORT", 00:18:23.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.990 "adrfam": "ipv4", 00:18:23.990 "trsvcid": "$NVMF_PORT", 00:18:23.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.990 "hdgst": ${hdgst:-false}, 00:18:23.990 "ddgst": ${ddgst:-false} 00:18:23.990 }, 00:18:23.990 "method": "bdev_nvme_attach_controller" 00:18:23.990 } 00:18:23.990 EOF 00:18:23.990 )") 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.990 { 00:18:23.990 "params": { 00:18:23.990 "name": "Nvme$subsystem", 00:18:23.990 "trtype": "$TEST_TRANSPORT", 00:18:23.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.990 "adrfam": "ipv4", 00:18:23.990 "trsvcid": "$NVMF_PORT", 00:18:23.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.990 "hdgst": ${hdgst:-false}, 00:18:23.990 "ddgst": ${ddgst:-false} 00:18:23.990 }, 00:18:23.990 "method": "bdev_nvme_attach_controller" 00:18:23.990 } 00:18:23.990 EOF 00:18:23.990 )") 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.990 { 00:18:23.990 "params": { 00:18:23.990 "name": "Nvme$subsystem", 00:18:23.990 "trtype": "$TEST_TRANSPORT", 00:18:23.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.990 "adrfam": "ipv4", 00:18:23.990 "trsvcid": "$NVMF_PORT", 00:18:23.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.990 "hdgst": ${hdgst:-false}, 00:18:23.990 "ddgst": ${ddgst:-false} 00:18:23.990 }, 00:18:23.990 "method": "bdev_nvme_attach_controller" 00:18:23.990 } 00:18:23.990 EOF 00:18:23.990 )") 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.990 { 00:18:23.990 "params": { 00:18:23.990 "name": "Nvme$subsystem", 00:18:23.990 "trtype": "$TEST_TRANSPORT", 00:18:23.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.990 "adrfam": "ipv4", 00:18:23.990 "trsvcid": "$NVMF_PORT", 00:18:23.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.990 "hdgst": ${hdgst:-false}, 00:18:23.990 "ddgst": ${ddgst:-false} 00:18:23.990 }, 00:18:23.990 "method": "bdev_nvme_attach_controller" 00:18:23.990 } 00:18:23.990 EOF 00:18:23.990 )") 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.990 00:55:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.990 { 00:18:23.990 "params": { 00:18:23.990 "name": "Nvme$subsystem", 00:18:23.990 "trtype": "$TEST_TRANSPORT", 00:18:23.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.990 "adrfam": "ipv4", 00:18:23.990 "trsvcid": "$NVMF_PORT", 00:18:23.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.990 "hdgst": ${hdgst:-false}, 00:18:23.990 "ddgst": ${ddgst:-false} 00:18:23.990 }, 00:18:23.990 "method": "bdev_nvme_attach_controller" 00:18:23.990 } 00:18:23.990 EOF 00:18:23.990 )") 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.990 { 00:18:23.990 "params": { 00:18:23.990 "name": "Nvme$subsystem", 00:18:23.990 "trtype": "$TEST_TRANSPORT", 00:18:23.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.990 "adrfam": "ipv4", 00:18:23.990 "trsvcid": "$NVMF_PORT", 00:18:23.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.990 "hdgst": ${hdgst:-false}, 00:18:23.990 "ddgst": ${ddgst:-false} 00:18:23.990 }, 00:18:23.990 "method": "bdev_nvme_attach_controller" 00:18:23.990 } 00:18:23.990 EOF 00:18:23.990 )") 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
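Both application launches in this test hand the generated JSON to the SPDK app as an anonymous file descriptor, which is why the trace shows --json /dev/fd/63 for bdev_svc (shutdown.sh@77) and --json /dev/fd/62 for bdevperf (shutdown.sh@91): the config comes from process substitution and never touches disk. A condensed equivalent of the traced @91 launch, with the Jenkins workspace path shortened to $rootdir as in the "Killed" message above:

	# shutdown.sh@91 (condensed): bdevperf reads the ten-controller config
	# from the <(...) pipe, which the shell exposes as /dev/fd/62.
	"$rootdir/build/examples/bdevperf" \
		--json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
		-q 64 -o 65536 -w verify -t 1

The earlier bdev_svc instance was started the same way but with -r /var/tmp/bdevperf.sock so that rpc_cmd could poll it with framework_wait_init (shutdown.sh@80) before tc1 killed it with kill -9 (shutdown.sh@83); the "line 73: 4038093 Killed" message above is the shell reporting that kill.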
00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:18:23.990 00:55:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:23.990 "params": { 00:18:23.990 "name": "Nvme1", 00:18:23.990 "trtype": "tcp", 00:18:23.990 "traddr": "10.0.0.2", 00:18:23.990 "adrfam": "ipv4", 00:18:23.990 "trsvcid": "4420", 00:18:23.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:23.990 "hdgst": false, 00:18:23.990 "ddgst": false 00:18:23.990 }, 00:18:23.990 "method": "bdev_nvme_attach_controller" 00:18:23.990 },{ 00:18:23.990 "params": { 00:18:23.990 "name": "Nvme2", 00:18:23.990 "trtype": "tcp", 00:18:23.990 "traddr": "10.0.0.2", 00:18:23.990 "adrfam": "ipv4", 00:18:23.990 "trsvcid": "4420", 00:18:23.990 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:23.990 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:23.990 "hdgst": false, 00:18:23.990 "ddgst": false 00:18:23.990 }, 00:18:23.990 "method": "bdev_nvme_attach_controller" 00:18:23.990 },{ 00:18:23.990 "params": { 00:18:23.990 "name": "Nvme3", 00:18:23.990 "trtype": "tcp", 00:18:23.990 "traddr": "10.0.0.2", 00:18:23.990 "adrfam": "ipv4", 00:18:23.990 "trsvcid": "4420", 00:18:23.990 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:23.990 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:23.990 "hdgst": false, 00:18:23.990 "ddgst": false 00:18:23.990 }, 00:18:23.990 "method": "bdev_nvme_attach_controller" 00:18:23.990 },{ 00:18:23.990 "params": { 00:18:23.990 "name": "Nvme4", 00:18:23.990 "trtype": "tcp", 00:18:23.990 "traddr": "10.0.0.2", 00:18:23.990 "adrfam": "ipv4", 00:18:23.990 "trsvcid": "4420", 00:18:23.990 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:23.990 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:23.990 "hdgst": false, 00:18:23.990 "ddgst": false 00:18:23.990 }, 00:18:23.990 "method": "bdev_nvme_attach_controller" 00:18:23.990 },{ 00:18:23.990 "params": { 00:18:23.990 "name": "Nvme5", 00:18:23.990 "trtype": "tcp", 00:18:23.990 "traddr": "10.0.0.2", 00:18:23.990 "adrfam": "ipv4", 00:18:23.990 "trsvcid": "4420", 00:18:23.990 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:23.990 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:23.990 "hdgst": false, 00:18:23.990 "ddgst": false 00:18:23.990 }, 00:18:23.990 "method": "bdev_nvme_attach_controller" 00:18:23.990 },{ 00:18:23.990 "params": { 00:18:23.990 "name": "Nvme6", 00:18:23.990 "trtype": "tcp", 00:18:23.990 "traddr": "10.0.0.2", 00:18:23.990 "adrfam": "ipv4", 00:18:23.990 "trsvcid": "4420", 00:18:23.990 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:23.990 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:23.990 "hdgst": false, 00:18:23.990 "ddgst": false 00:18:23.990 }, 00:18:23.990 "method": "bdev_nvme_attach_controller" 00:18:23.991 },{ 00:18:23.991 "params": { 00:18:23.991 "name": "Nvme7", 00:18:23.991 "trtype": "tcp", 00:18:23.991 "traddr": "10.0.0.2", 00:18:23.991 "adrfam": "ipv4", 00:18:23.991 "trsvcid": "4420", 00:18:23.991 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:23.991 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:23.991 "hdgst": false, 00:18:23.991 "ddgst": false 00:18:23.991 }, 00:18:23.991 "method": "bdev_nvme_attach_controller" 00:18:23.991 },{ 00:18:23.991 "params": { 00:18:23.991 "name": "Nvme8", 00:18:23.991 "trtype": "tcp", 00:18:23.991 "traddr": "10.0.0.2", 00:18:23.991 "adrfam": "ipv4", 00:18:23.991 "trsvcid": "4420", 00:18:23.991 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:23.991 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:23.991 "hdgst": false, 
00:18:23.991 "ddgst": false 00:18:23.991 }, 00:18:23.991 "method": "bdev_nvme_attach_controller" 00:18:23.991 },{ 00:18:23.991 "params": { 00:18:23.991 "name": "Nvme9", 00:18:23.991 "trtype": "tcp", 00:18:23.991 "traddr": "10.0.0.2", 00:18:23.991 "adrfam": "ipv4", 00:18:23.991 "trsvcid": "4420", 00:18:23.991 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:23.991 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:23.991 "hdgst": false, 00:18:23.991 "ddgst": false 00:18:23.991 }, 00:18:23.991 "method": "bdev_nvme_attach_controller" 00:18:23.991 },{ 00:18:23.991 "params": { 00:18:23.991 "name": "Nvme10", 00:18:23.991 "trtype": "tcp", 00:18:23.991 "traddr": "10.0.0.2", 00:18:23.991 "adrfam": "ipv4", 00:18:23.991 "trsvcid": "4420", 00:18:23.991 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:23.991 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:23.991 "hdgst": false, 00:18:23.991 "ddgst": false 00:18:23.991 }, 00:18:23.991 "method": "bdev_nvme_attach_controller" 00:18:23.991 }' 00:18:23.991 [2024-05-15 00:55:10.979259] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:18:23.991 [2024-05-15 00:55:10.979350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4038414 ] 00:18:23.991 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.991 [2024-05-15 00:55:11.042373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.249 [2024-05-15 00:55:11.159211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.716 Running I/O for 1 seconds... 00:18:27.094 00:18:27.094 Latency(us) 00:18:27.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.094 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:27.094 Verification LBA range: start 0x0 length 0x400 00:18:27.094 Nvme1n1 : 1.12 175.45 10.97 0.00 0.00 356647.68 21942.42 329330.54 00:18:27.094 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:27.094 Verification LBA range: start 0x0 length 0x400 00:18:27.094 Nvme2n1 : 1.08 177.36 11.09 0.00 0.00 346146.64 23981.32 330883.98 00:18:27.094 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:27.094 Verification LBA range: start 0x0 length 0x400 00:18:27.094 Nvme3n1 : 1.23 208.32 13.02 0.00 0.00 292379.88 24466.77 320009.86 00:18:27.094 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:27.094 Verification LBA range: start 0x0 length 0x400 00:18:27.094 Nvme4n1 : 1.13 170.02 10.63 0.00 0.00 348426.75 23107.51 327777.09 00:18:27.094 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:27.094 Verification LBA range: start 0x0 length 0x400 00:18:27.094 Nvme5n1 : 1.14 167.84 10.49 0.00 0.00 346499.86 27379.48 327777.09 00:18:27.094 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:27.094 Verification LBA range: start 0x0 length 0x400 00:18:27.094 Nvme6n1 : 1.22 157.01 9.81 0.00 0.00 364030.23 20486.07 369720.13 00:18:27.094 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:27.094 Verification LBA range: start 0x0 length 0x400 00:18:27.094 Nvme7n1 : 1.23 207.48 12.97 0.00 0.00 270789.03 19029.71 312242.63 00:18:27.094 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:27.094 Verification LBA range: start 0x0 length 0x400 
00:18:27.094 Nvme8n1 : 1.24 206.52 12.91 0.00 0.00 266312.63 14175.19 318456.41 00:18:27.094 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:27.094 Verification LBA range: start 0x0 length 0x400 00:18:27.094 Nvme9n1 : 1.21 164.24 10.26 0.00 0.00 324441.01 5801.15 346418.44 00:18:27.094 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:27.094 Verification LBA range: start 0x0 length 0x400 00:18:27.094 Nvme10n1 : 1.24 205.80 12.86 0.00 0.00 256378.88 22136.60 332437.43 00:18:27.094 =================================================================================================================== 00:18:27.094 Total : 1840.03 115.00 0.00 0.00 311966.86 5801.15 369720.13 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:27.094 rmmod nvme_tcp 00:18:27.094 rmmod nvme_fabrics 00:18:27.094 rmmod nvme_keyring 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 4038006 ']' 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 4038006 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 4038006 ']' 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 4038006 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4038006 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:27.094 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:27.095 00:55:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4038006' 00:18:27.095 killing process with pid 4038006 00:18:27.095 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 4038006 00:18:27.095 [2024-05-15 00:55:14.110383] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:27.095 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 4038006 00:18:27.664 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:27.664 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:27.664 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:27.664 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:27.664 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:27.664 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.664 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:27.664 00:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:29.571 00:18:29.571 real 0m11.341s 00:18:29.571 user 0m33.918s 00:18:29.571 sys 0m2.820s 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:29.571 ************************************ 00:18:29.571 END TEST nvmf_shutdown_tc1 00:18:29.571 ************************************ 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:29.571 ************************************ 00:18:29.571 START TEST nvmf_shutdown_tc2 00:18:29.571 ************************************ 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:29.571 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:29.831 00:55:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:29.831 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:29.831 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:29.831 Found net devices under 0000:08:00.0: cvl_0_0 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:29.831 Found net devices under 0000:08:00.1: cvl_0_1 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:29.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:29.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:18:29.831 00:18:29.831 --- 10.0.0.2 ping statistics --- 00:18:29.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.831 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:29.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:29.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:18:29.831 00:18:29.831 --- 10.0.0.1 ping statistics --- 00:18:29.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.831 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4039043 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4039043 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 4039043 ']' 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:29.831 00:55:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:29.832 [2024-05-15 00:55:16.829581] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
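The nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) is what turns this single host into a two-endpoint NVMe/TCP testbed: one port of the detected NIC pair (cvl_0_0, 10.0.0.2) is moved into a private network namespace to act as the target, the other (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, and the two single-packet pings confirm the link in both directions. Condensed from the traced commands:

	# nvmf/common.sh@248-268 (condensed): isolate the target NIC in its own
	# namespace so initiator and target traffic crosses a real link.
	ip netns add cvl_0_0_ns_spdk
	ip link set cvl_0_0 netns cvl_0_0_ns_spdk
	ip addr add 10.0.0.1/24 dev cvl_0_1
	ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
	ip link set cvl_0_1 up
	ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
	ip netns exec cvl_0_0_ns_spdk ip link set lo up
	iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
	ping -c 1 10.0.0.2                                   # initiator -> target
	ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

This is also why the nvmf/common.sh@480 line above launches nvmf_tgt under ip netns exec cvl_0_0_ns_spdk (twice, apparently because NVMF_TARGET_NS_CMD is prepended to NVMF_APP at @270 and the launch wraps it once more): the target process must live in the namespace that owns cvl_0_0.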
00:18:29.832 [2024-05-15 00:55:16.829680] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.832 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.090 [2024-05-15 00:55:16.902255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:30.090 [2024-05-15 00:55:17.022063] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.090 [2024-05-15 00:55:17.022118] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.090 [2024-05-15 00:55:17.022134] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.090 [2024-05-15 00:55:17.022147] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.090 [2024-05-15 00:55:17.022159] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:30.090 [2024-05-15 00:55:17.022215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.090 [2024-05-15 00:55:17.022267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:30.090 [2024-05-15 00:55:17.022318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:30.090 [2024-05-15 00:55:17.022321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.090 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:30.090 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:18:30.090 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:30.090 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.090 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:30.349 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.349 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:30.349 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.349 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:30.349 [2024-05-15 00:55:17.159502] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.349 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.349 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:18:30.349 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.350 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:30.350 Malloc1 00:18:30.350 [2024-05-15 00:55:17.237276] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:30.350 [2024-05-15 00:55:17.237575] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.350 Malloc2 00:18:30.350 Malloc3 00:18:30.350 Malloc4 00:18:30.350 Malloc5 00:18:30.608 Malloc6 00:18:30.608 Malloc7 00:18:30.608 Malloc8 00:18:30.608 Malloc9 00:18:30.608 Malloc10 00:18:30.608 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.608 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:18:30.608 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.608 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:30.867 00:55:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=4039175 00:18:30.867 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 4039175 /var/tmp/bdevperf.sock 00:18:30.867 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 4039175 ']' 00:18:30.867 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.867 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:30.867 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:30.868 { 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme$subsystem", 00:18:30.868 "trtype": "$TEST_TRANSPORT", 00:18:30.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "$NVMF_PORT", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.868 "hdgst": ${hdgst:-false}, 00:18:30.868 "ddgst": ${ddgst:-false} 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 } 00:18:30.868 EOF 00:18:30.868 )") 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:30.868 { 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme$subsystem", 00:18:30.868 "trtype": "$TEST_TRANSPORT", 00:18:30.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "$NVMF_PORT", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.868 "hdgst": ${hdgst:-false}, 00:18:30.868 "ddgst": ${ddgst:-false} 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 } 00:18:30.868 EOF 00:18:30.868 )") 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:30.868 { 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme$subsystem", 00:18:30.868 "trtype": "$TEST_TRANSPORT", 00:18:30.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "$NVMF_PORT", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.868 "hdgst": ${hdgst:-false}, 00:18:30.868 "ddgst": ${ddgst:-false} 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 } 00:18:30.868 EOF 00:18:30.868 )") 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:30.868 { 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme$subsystem", 00:18:30.868 "trtype": "$TEST_TRANSPORT", 00:18:30.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "$NVMF_PORT", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.868 "hdgst": ${hdgst:-false}, 00:18:30.868 "ddgst": ${ddgst:-false} 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 } 00:18:30.868 EOF 00:18:30.868 )") 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:30.868 { 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme$subsystem", 00:18:30.868 "trtype": "$TEST_TRANSPORT", 00:18:30.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "$NVMF_PORT", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.868 "hdgst": ${hdgst:-false}, 00:18:30.868 "ddgst": ${ddgst:-false} 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 } 00:18:30.868 EOF 00:18:30.868 )") 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:30.868 { 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme$subsystem", 00:18:30.868 "trtype": "$TEST_TRANSPORT", 00:18:30.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "$NVMF_PORT", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.868 "hdgst": ${hdgst:-false}, 00:18:30.868 "ddgst": ${ddgst:-false} 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 } 00:18:30.868 EOF 00:18:30.868 )") 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:30.868 { 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme$subsystem", 00:18:30.868 "trtype": "$TEST_TRANSPORT", 00:18:30.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "$NVMF_PORT", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.868 "hdgst": ${hdgst:-false}, 00:18:30.868 "ddgst": ${ddgst:-false} 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 } 00:18:30.868 EOF 00:18:30.868 )") 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:30.868 { 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme$subsystem", 00:18:30.868 "trtype": "$TEST_TRANSPORT", 00:18:30.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "$NVMF_PORT", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.868 "hdgst": ${hdgst:-false}, 00:18:30.868 "ddgst": ${ddgst:-false} 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 } 00:18:30.868 EOF 00:18:30.868 )") 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:30.868 { 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme$subsystem", 00:18:30.868 "trtype": "$TEST_TRANSPORT", 00:18:30.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "$NVMF_PORT", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.868 "hdgst": ${hdgst:-false}, 00:18:30.868 "ddgst": ${ddgst:-false} 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 } 00:18:30.868 EOF 00:18:30.868 )") 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:30.868 { 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme$subsystem", 00:18:30.868 "trtype": "$TEST_TRANSPORT", 00:18:30.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "$NVMF_PORT", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.868 "hdgst": ${hdgst:-false}, 00:18:30.868 "ddgst": ${ddgst:-false} 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 } 00:18:30.868 EOF 00:18:30.868 )") 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
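The ten near-identical heredoc expansions traced above are gen_nvmf_target_json (nvmf/common.sh) emitting one bdev_nvme_attach_controller stanza per subsystem; the jq / IFS=, / printf steps on the surrounding lines join them into the JSON that bdevperf reads via --json /dev/fd/63. A condensed sketch of the pattern, reconstructed from this trace — note the bare [...] wrapper around the joined stanzas is a simplification added here so jq accepts the fragment stand-alone; the real helper embeds them in a fuller config document:

    gen_nvmf_target_json() {
        local subsystem config=()
        # One attach-controller stanza per requested subsystem id (1..10 here).
        for subsystem in "${@:-1}"; do
            config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
EOF
            )")
        done
        # Comma-join the stanzas (first char of IFS) and pretty-print,
        # matching the IFS=, / printf / jq steps traced here.
        local IFS=,
        printf '[%s]\n' "${config[*]}" | jq .
    }

Invoked here as gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10, which expands to the Nvme1..Nvme10 stanzas printed on the lines that follow.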
00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:18:30.868 00:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme1", 00:18:30.868 "trtype": "tcp", 00:18:30.868 "traddr": "10.0.0.2", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "4420", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.868 "hdgst": false, 00:18:30.868 "ddgst": false 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 },{ 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme2", 00:18:30.868 "trtype": "tcp", 00:18:30.868 "traddr": "10.0.0.2", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "4420", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:30.868 "hdgst": false, 00:18:30.868 "ddgst": false 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 },{ 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme3", 00:18:30.868 "trtype": "tcp", 00:18:30.868 "traddr": "10.0.0.2", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "4420", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:30.868 "hdgst": false, 00:18:30.868 "ddgst": false 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 },{ 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme4", 00:18:30.868 "trtype": "tcp", 00:18:30.868 "traddr": "10.0.0.2", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "4420", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:30.868 "hdgst": false, 00:18:30.868 "ddgst": false 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 },{ 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme5", 00:18:30.868 "trtype": "tcp", 00:18:30.868 "traddr": "10.0.0.2", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "4420", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:30.868 "hdgst": false, 00:18:30.868 "ddgst": false 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 },{ 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme6", 00:18:30.868 "trtype": "tcp", 00:18:30.868 "traddr": "10.0.0.2", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "4420", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:30.868 "hdgst": false, 00:18:30.868 "ddgst": false 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 },{ 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme7", 00:18:30.868 "trtype": "tcp", 00:18:30.868 "traddr": "10.0.0.2", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "4420", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:30.868 "hdgst": false, 00:18:30.868 "ddgst": false 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 },{ 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme8", 00:18:30.868 "trtype": "tcp", 00:18:30.868 "traddr": "10.0.0.2", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "4420", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:30.868 "hdgst": false, 
00:18:30.868 "ddgst": false 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 },{ 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme9", 00:18:30.868 "trtype": "tcp", 00:18:30.868 "traddr": "10.0.0.2", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "4420", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:30.868 "hdgst": false, 00:18:30.868 "ddgst": false 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 },{ 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme10", 00:18:30.868 "trtype": "tcp", 00:18:30.868 "traddr": "10.0.0.2", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "4420", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:30.868 "hdgst": false, 00:18:30.868 "ddgst": false 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 }' 00:18:30.868 [2024-05-15 00:55:17.728897] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:18:30.869 [2024-05-15 00:55:17.728993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4039175 ] 00:18:30.869 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.869 [2024-05-15 00:55:17.790558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.869 [2024-05-15 00:55:17.907194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.771 Running I/O for 10 seconds... 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:32.771 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.030 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:18:33.030 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:18:33.030 00:55:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:18:33.289 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:18:33.289 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:33.289 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:33.289 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:33.289 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.289 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:33.289 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.289 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:18:33.289 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:18:33.289 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:18:33.547 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:18:33.547 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:33.547 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:33.547 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:33.547 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.547 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:33.547 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.547 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:18:33.547 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:18:33.547 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:18:33.806 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:18:33.806 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:33.806 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:33.806 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:33.806 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.806 00:55:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:33.806 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.806 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:18:33.806 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:18:33.806 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:18:33.806 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:18:33.806 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:18:33.806 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 4039175 00:18:33.806 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 4039175 ']' 00:18:33.806 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 4039175 00:18:33.807 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:18:33.807 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:33.807 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4039175 00:18:33.807 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:33.807 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:33.807 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4039175' 00:18:33.807 killing process with pid 4039175 00:18:33.807 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 4039175 00:18:33.807 00:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 4039175
00:18:33.807 Received shutdown signal, test time was about 1.312923 seconds
00:18:33.807
00:18:33.807                                                            Latency(us)
00:18:33.807 Device Information          : runtime(s)    IOPS     MiB/s    Fail/s    TO/s      Average        min        max
00:18:33.807 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.807          Verification LBA range: start 0x0 length 0x400
00:18:33.807          Nvme1n1            :       1.28     149.78     9.36      0.00     0.00    421406.59   51263.72  344865.00
00:18:33.807 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.807          Verification LBA range: start 0x0 length 0x400
00:18:33.807          Nvme2n1            :       1.28     200.08    12.51      0.00     0.00    310745.32   20097.71  323116.75
00:18:33.807 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.807          Verification LBA range: start 0x0 length 0x400
00:18:33.807          Nvme3n1            :       1.30     196.55    12.28      0.00     0.00    310633.43   22816.24  347971.89
00:18:33.807 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.807          Verification LBA range: start 0x0 length 0x400
00:18:33.807          Nvme4n1            :       1.29     198.17    12.39      0.00     0.00    302188.09   21748.24  327777.09
00:18:33.807 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.807          Verification LBA range: start 0x0 length 0x400
00:18:33.807          Nvme5n1            :       1.29     149.20     9.32      0.00     0.00    393931.03   52817.16  361952.90
00:18:33.807 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.807          Verification LBA range: start 0x0 length 0x400
00:18:33.807          Nvme6n1            :       1.30     200.96    12.56      0.00     0.00    286378.04    3446.71  332437.43
00:18:33.807 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.807          Verification LBA range: start 0x0 length 0x400
00:18:33.807          Nvme7n1            :       1.31     195.12    12.19      0.00     0.00    290481.49   21942.42  343311.55
00:18:33.807 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.807          Verification LBA range: start 0x0 length 0x400
00:18:33.807          Nvme8n1            :       1.31     198.94    12.43      0.00     0.00    279196.90    2390.85  335544.32
00:18:33.807 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.807          Verification LBA range: start 0x0 length 0x400
00:18:33.807          Nvme9n1            :       1.28     153.80     9.61      0.00     0.00    350483.05   11456.66  380594.25
00:18:33.807 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:33.807          Verification LBA range: start 0x0 length 0x400
00:18:33.807          Nvme10n1           :       1.26     151.86     9.49      0.00     0.00    349147.09   25243.50  346418.44
00:18:33.807 ===================================================================================================================
00:18:33.807 Total                       :             1794.47   112.15      0.00     0.00    323881.99    2390.85  380594.25
00:18:34.065 00:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 4039043 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:35.440 rmmod nvme_tcp 00:18:35.440 rmmod nvme_fabrics 00:18:35.440 rmmod nvme_keyring 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 4039043 ']' 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 4039043 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 4039043 ']' 00:18:35.440 00:55:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 4039043 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4039043 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4039043' 00:18:35.440 killing process with pid 4039043 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 4039043 00:18:35.440 [2024-05-15 00:55:22.191593] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:35.440 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 4039043 00:18:35.699 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:35.699 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:35.699 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:35.699 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:35.699 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:35.699 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.699 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.699 00:55:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.607 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:37.607 00:18:37.607 real 0m8.007s 00:18:37.607 user 0m24.533s 00:18:37.607 sys 0m1.646s 00:18:37.607 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:37.607 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:37.607 ************************************ 00:18:37.607 END TEST nvmf_shutdown_tc2 00:18:37.607 ************************************ 00:18:37.607 00:55:24 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:18:37.607 00:55:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:37.607 00:55:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:37.607 00:55:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:37.867 ************************************ 00:18:37.867 START TEST nvmf_shutdown_tc3 00:18:37.867 ************************************ 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 
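Both kills in the tc2 teardown above (the bdevperf process, pid 4039175, then the nvmf target, pid 4039043) go through the same killprocess helper in autotest_common.sh. A condensed sketch of the branch this run takes — the sudo/escalation path is elided, since process_name resolves to reactor_0/reactor_1 here:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid"                    # assert the target is still alive
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            :                             # elevated-process path, not taken in this run
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                       # reap it so the exit status propagates
    }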
00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:37.867 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:37.868 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:37.868 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:37.868 00:55:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:37.868 Found net devices under 0000:08:00.0: cvl_0_0 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:37.868 Found net devices under 0000:08:00.1: cvl_0_1 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:37.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:18:37.868 00:18:37.868 --- 10.0.0.2 ping statistics --- 00:18:37.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.868 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:37.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:37.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:18:37.868 00:18:37.868 --- 10.0.0.1 ping statistics --- 00:18:37.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.868 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=4039993 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 4039993 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 4039993 ']' 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:37.868 00:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:37.868 [2024-05-15 00:55:24.910253] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
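The nvmf_tcp_init sequence traced above is what lets one host play both sides of the connection without loopback: the first e810 port (cvl_0_0) is moved into a private network namespace and becomes the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and one ping in each direction proves the path before the target starts. The same steps, collected from the trace (device and namespace names exactly as logged):

    NVMF_INITIATOR_IP=10.0.0.1
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"        # target NIC into the netns
    ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 "$NVMF_FIRST_TARGET_IP"                              # initiator -> target
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"  # target -> initiator

This is also why the tc3 nvmf_tgt (pid 4039993) is launched through the ip netns exec cvl_0_0_ns_spdk prefix visible in the trace that follows.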
00:18:37.868 [2024-05-15 00:55:24.910352] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.127 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.127 [2024-05-15 00:55:24.990617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:38.127 [2024-05-15 00:55:25.142729] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.127 [2024-05-15 00:55:25.142787] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.127 [2024-05-15 00:55:25.142803] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.127 [2024-05-15 00:55:25.142817] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.127 [2024-05-15 00:55:25.142829] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.127 [2024-05-15 00:55:25.142912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.127 [2024-05-15 00:55:25.142964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:38.127 [2024-05-15 00:55:25.143056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:38.127 [2024-05-15 00:55:25.143060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:38.386 [2024-05-15 00:55:25.300664] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.386 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:38.386 Malloc1 00:18:38.386 [2024-05-15 00:55:25.390978] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:38.386 [2024-05-15 00:55:25.391371] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.386 Malloc2 00:18:38.645 Malloc3 00:18:38.645 Malloc4 00:18:38.645 Malloc5 00:18:38.645 Malloc6 00:18:38.645 Malloc7 00:18:38.645 Malloc8 00:18:38.904 Malloc9 00:18:38.904 Malloc10 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:38.904 00:55:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=4040141 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 4040141 /var/tmp/bdevperf.sock 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 4040141 ']' 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:38.904 { 00:18:38.904 "params": { 00:18:38.904 "name": "Nvme$subsystem", 00:18:38.904 "trtype": "$TEST_TRANSPORT", 00:18:38.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:38.904 "adrfam": "ipv4", 00:18:38.904 "trsvcid": "$NVMF_PORT", 00:18:38.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:38.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:38.904 "hdgst": ${hdgst:-false}, 00:18:38.904 "ddgst": ${ddgst:-false} 00:18:38.904 }, 00:18:38.904 "method": "bdev_nvme_attach_controller" 00:18:38.904 } 00:18:38.904 EOF 00:18:38.904 )") 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:38.904 { 00:18:38.904 "params": { 00:18:38.904 "name": "Nvme$subsystem", 00:18:38.904 "trtype": "$TEST_TRANSPORT", 00:18:38.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:38.904 "adrfam": "ipv4", 00:18:38.904 "trsvcid": "$NVMF_PORT", 00:18:38.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:38.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:38.904 "hdgst": ${hdgst:-false}, 00:18:38.904 "ddgst": ${ddgst:-false} 00:18:38.904 }, 00:18:38.904 "method": "bdev_nvme_attach_controller" 00:18:38.904 } 00:18:38.904 EOF 00:18:38.904 )") 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:18:38.904 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
(the @534 loop entry, @554 heredoc config+= append, and @554 cat traced above for the first two subsystems repeat verbatim for subsystems 3 through 10; only the trailing timestamps advance from 00:18:38.904 to 00:18:38.905)
00:18:38.905 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
00:18:38.905 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:18:38.905 00:55:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:38.905 "params": { 00:18:38.905 "name": "Nvme1", 00:18:38.905 "trtype": "tcp", 00:18:38.905 "traddr": "10.0.0.2", 00:18:38.905 "adrfam": "ipv4", 00:18:38.905 "trsvcid": "4420", 00:18:38.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.905 "hdgst": false, 00:18:38.905 "ddgst": false 00:18:38.905 }, 00:18:38.905 "method": "bdev_nvme_attach_controller" 00:18:38.905 },{ 00:18:38.905 "params": { 00:18:38.905 "name": "Nvme2", 00:18:38.905 "trtype": "tcp", 00:18:38.905 "traddr": "10.0.0.2", 00:18:38.905 "adrfam": "ipv4", 00:18:38.905 "trsvcid": "4420", 00:18:38.905 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:38.905 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:38.905 "hdgst": false, 00:18:38.905 "ddgst": false 00:18:38.905 }, 00:18:38.905 "method": "bdev_nvme_attach_controller" 00:18:38.905 },{ 00:18:38.905 "params": { 00:18:38.905 "name": "Nvme3", 00:18:38.905 "trtype": "tcp", 00:18:38.905 "traddr": "10.0.0.2", 00:18:38.905 "adrfam": "ipv4", 00:18:38.905 "trsvcid": "4420", 00:18:38.905 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:38.905 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:38.905 "hdgst": false, 00:18:38.905 "ddgst": false 00:18:38.905 }, 00:18:38.905 "method": "bdev_nvme_attach_controller" 00:18:38.905 },{ 00:18:38.905 "params": { 00:18:38.905 "name": "Nvme4", 00:18:38.905 "trtype": "tcp", 00:18:38.905 "traddr": "10.0.0.2", 00:18:38.905 "adrfam": "ipv4", 00:18:38.905 "trsvcid": "4420", 00:18:38.905 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:38.905 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:38.905 "hdgst": false, 00:18:38.905 "ddgst": false 00:18:38.905 }, 00:18:38.905 "method": "bdev_nvme_attach_controller" 00:18:38.905 },{ 00:18:38.905 "params": { 00:18:38.905 "name": "Nvme5", 00:18:38.905 "trtype": "tcp", 00:18:38.905 "traddr": "10.0.0.2", 00:18:38.905 "adrfam": "ipv4", 00:18:38.905 "trsvcid": "4420", 00:18:38.905 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:38.905 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:38.905 "hdgst": false, 00:18:38.905 "ddgst": false 00:18:38.905 }, 00:18:38.905 "method": "bdev_nvme_attach_controller" 00:18:38.905 },{ 00:18:38.905 "params": { 00:18:38.905 "name": "Nvme6", 00:18:38.905 "trtype": "tcp", 00:18:38.905 "traddr": "10.0.0.2", 00:18:38.905 "adrfam": "ipv4", 00:18:38.905 "trsvcid": "4420", 00:18:38.905 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:38.905 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:38.905 "hdgst": false, 00:18:38.905 "ddgst": false 00:18:38.905 }, 00:18:38.905 "method": "bdev_nvme_attach_controller" 00:18:38.905 },{ 00:18:38.905 "params": { 00:18:38.905 "name": "Nvme7", 00:18:38.905 "trtype": "tcp", 00:18:38.905 "traddr": "10.0.0.2", 00:18:38.905 "adrfam": "ipv4", 00:18:38.905 "trsvcid": "4420", 00:18:38.905 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:38.905 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:38.905 "hdgst": false, 00:18:38.905 "ddgst": false 00:18:38.905 }, 00:18:38.905 "method": "bdev_nvme_attach_controller" 00:18:38.905 },{ 00:18:38.905 "params": { 00:18:38.905 "name": "Nvme8", 00:18:38.905 "trtype": "tcp", 00:18:38.905 "traddr": "10.0.0.2", 00:18:38.906 "adrfam": "ipv4", 00:18:38.906 "trsvcid": "4420", 00:18:38.906 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:38.906 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:38.906 "hdgst": false, 
00:18:38.906 "ddgst": false 00:18:38.906 }, 00:18:38.906 "method": "bdev_nvme_attach_controller" 00:18:38.906 },{ 00:18:38.906 "params": { 00:18:38.906 "name": "Nvme9", 00:18:38.906 "trtype": "tcp", 00:18:38.906 "traddr": "10.0.0.2", 00:18:38.906 "adrfam": "ipv4", 00:18:38.906 "trsvcid": "4420", 00:18:38.906 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:38.906 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:38.906 "hdgst": false, 00:18:38.906 "ddgst": false 00:18:38.906 }, 00:18:38.906 "method": "bdev_nvme_attach_controller" 00:18:38.906 },{ 00:18:38.906 "params": { 00:18:38.906 "name": "Nvme10", 00:18:38.906 "trtype": "tcp", 00:18:38.906 "traddr": "10.0.0.2", 00:18:38.906 "adrfam": "ipv4", 00:18:38.906 "trsvcid": "4420", 00:18:38.906 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:38.906 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:38.906 "hdgst": false, 00:18:38.906 "ddgst": false 00:18:38.906 }, 00:18:38.906 "method": "bdev_nvme_attach_controller" 00:18:38.906 }' 00:18:38.906 [2024-05-15 00:55:25.869431] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:18:38.906 [2024-05-15 00:55:25.869519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4040141 ] 00:18:38.906 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.906 [2024-05-15 00:55:25.931271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.164 [2024-05-15 00:55:26.048243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.067 Running I/O for 10 seconds... 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:18:41.326 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:18:41.584 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:18:41.584 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:41.584 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:41.584 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:41.584 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.584 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:41.584 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.584 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:18:41.584 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:18:41.584 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:18:41.842 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:18:41.842 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:41.842 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:41.842 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:41.842 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.842 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:41.842 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.842 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:18:41.842 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:18:41.842 00:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r 
'.bdevs[0].num_read_ops' 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 4039993 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 4039993 ']' 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 4039993 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4039993 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4039993' 00:18:42.104 killing process with pid 4039993 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 4039993 00:18:42.104 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 4039993 00:18:42.104 [2024-05-15 00:55:29.132463] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:42.104 [2024-05-15 00:55:29.140126] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe3a90 is same with the state(5) to be set 00:18:42.104 [2024-05-15 00:55:29.140208] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe3a90 is same with the state(5) to be set 00:18:42.104 [2024-05-15 00:55:29.140224] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe3a90 is same with the state(5) to be set 00:18:42.104 [2024-05-15 00:55:29.140238] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe3a90 is same with the state(5) to be set 00:18:42.104 [2024-05-15 00:55:29.140251] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe3a90 is same with the state(5) to be set 00:18:42.104 [2024-05-15 00:55:29.140264] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe3a90 is same with the state(5) to be set 00:18:42.104 [2024-05-15 00:55:29.140278] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe3a90 is same with the state(5) to be set
(the tcp.c:1598 recv-state message repeats back-to-back during teardown; collapsed runs: tqpair=0x1fe3a90 from 00:55:29.140126 through 00:55:29.141161, tqpair=0x1fe3f30 from 00:55:29.144452 through 00:55:29.145359, tqpair=0x1fe4870 from 00:55:29.147858 through 00:55:29.148785, tqpair=0x1fe4d10 from 00:55:29.149665 through 00:55:29.150556, tqpair=0x1fe51b0 from 00:55:29.151587 through 00:55:29.152085)
00:18:42.107 [2024-05-15
00:55:29.152098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152129] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152183] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152196] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152237] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152264] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152277] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152291] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152319] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152332] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152360] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same 
with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152401] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152415] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152427] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152441] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.152468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe51b0 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153656] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153701] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153733] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153760] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153774] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153787] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153837] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153864] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153878] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153892] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153906] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153940] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153956] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.153992] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154096] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154114] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154128] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154183] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154196] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154224] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the 
state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154237] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154251] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154265] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154279] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154307] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154320] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154334] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154349] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.107 [2024-05-15 00:55:29.154395] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.154410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.154424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.154438] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.154451] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.154465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.154479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.154496] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168c10 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156032] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156122] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156213] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156227] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156240] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156254] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156268] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156282] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156295] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156309] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156322] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156336] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156350] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156419] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156433] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 
00:55:29.156446] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156460] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156494] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156508] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156521] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156549] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156563] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156604] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156618] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156644] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156659] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156714] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same with the state(5) to be set 00:18:42.108 [2024-05-15 00:55:29.156741] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169420 is same 
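The tcp.c:1598 message that floods this stretch of the log comes from a guard in SPDK's TCP transport receive-state setter: when asked to move a qpair into the state it already holds, it logs the no-op transition and returns without changing anything. A minimal sketch of that guard pattern follows; the enum and struct names are illustrative assumptions for this sketch, not SPDK's exact definitions.

#include <stdio.h>

/* Illustrative stand-ins; SPDK's real types live in lib/nvmf/tcp.c. */
enum recv_state { STATE_0, STATE_1, STATE_2, STATE_3, STATE_4, STATE_5 };

struct tqpair {
	enum recv_state recv_state;
};

static void set_recv_state(struct tqpair *tqpair, enum recv_state state)
{
	if (tqpair->recv_state == state) {
		/* This no-op transition is what is logged over and over above. */
		fprintf(stderr,
			"The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
}

int main(void)
{
	struct tqpair q = { STATE_5 };

	set_recv_state(&q, STATE_5); /* already in state 5: logs and returns */
	return 0;
}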
00:18:42.377 [2024-05-15 00:55:29.158995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.377 [2024-05-15 00:55:29.159065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:42.377 [2024-05-15 00:55:29.159114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.377 [2024-05-15 00:55:29.159145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
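Every abort in the dump above and below carries the status "(00/08)": status code type 0x0 (Generic Command Status) with status code 0x8, which the NVMe specification defines as Command Aborted due to SQ Deletion, i.e. the submission queue was torn down while these I/Os were still outstanding. A tiny decoder for that notation; the constants follow the NVMe spec, and the helper function itself is purely illustrative.

#include <stdio.h>

/* NVMe completion status fields as printed "(SCT/SC)" in the log. */
#define NVME_SCT_GENERIC            0x0
#define NVME_SC_ABORTED_SQ_DELETION 0x8

static const char *decode_status(int sct, int sc)
{
	if (sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION) {
		return "ABORTED - SQ DELETION";
	}
	return "other status";
}

int main(void)
{
	/* "(00/08)" from the log above */
	printf("(00/08) -> %s\n", decode_status(0x0, 0x8));
	return 0;
}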
[... WRITE commands sqid:1 cid:9 through cid:63 (lba:25728 through lba:32640, len:128) printed and completed ABORTED - SQ DELETION (00/08) in the same pattern, through 00:55:29.161157 ...]
[... READ commands sqid:1 cid:0 through cid:6 (lba:24576 through lba:25344, len:128) printed and completed ABORTED - SQ DELETION (00/08), through 00:55:29.161382 ...]
00:18:42.378 [2024-05-15 00:55:29.161450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:18:42.378 [2024-05-15 00:55:29.161528] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2309f90 was disconnected and freed. reset controller.
00:18:42.379 [2024-05-15 00:55:29.162274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:42.379 [2024-05-15 00:55:29.162310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST abort repeats for qid:0 cid:1 through cid:3 ...]
00:18:42.379 [2024-05-15 00:55:29.162415] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235cf40 is same with the state(5) to be set
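The "CQ transport error -6 (No such device or address)" line above is a negative errno surfacing from the completion path: -6 is -ENXIO on Linux, and the parenthesized text is its strerror() rendering, after which the bdev layer disconnects the qpair and schedules the controller reset noted in the next record. A one-liner to confirm the errno mapping (plain C, no SPDK dependency):

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* "CQ transport error -6 (No such device or address)": -6 == -ENXIO */
	printf("errno %d -> %s\n", ENXIO, strerror(ENXIO));
	return 0;
}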
[... the same block of four aborted ASYNC EVENT REQUESTs followed by the nvme_tcp.c:323 recv-state error repeats for tqpair=0x2339660 (00:55:29.162592), 0x2453770 (00:55:29.162768), 0x233a1e0 (00:55:29.163000), 0x1e6d730 (00:55:29.163188), 0x230f220 (00:55:29.163360), 0x24d9620 (00:55:29.163544) and 0x2339cf0 (00:55:29.163715) ...]
00:18:42.379 [2024-05-15 00:55:29.163764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:42.379 [2024-05-15 00:55:29.163794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:42.379 [2024-05-15 00:55:29.163815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:42.379 [2024-05-15 00:55:29.163829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:42.379 [2024-05-15 00:55:29.163845] nvme_qpair.c:
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.379 [2024-05-15 00:55:29.163859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.379 [2024-05-15 00:55:29.163874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.379 [2024-05-15 00:55:29.163888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.379 [2024-05-15 00:55:29.163902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2367500 is same with the state(5) to be set 00:18:42.379 [2024-05-15 00:55:29.163963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.379 [2024-05-15 00:55:29.163993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.379 [2024-05-15 00:55:29.164008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.379 [2024-05-15 00:55:29.164022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.379 [2024-05-15 00:55:29.164046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.379 [2024-05-15 00:55:29.164061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.379 [2024-05-15 00:55:29.164076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.379 [2024-05-15 00:55:29.164095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.379 [2024-05-15 00:55:29.164109] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2370f20 is same with the state(5) to be set 00:18:42.379 [2024-05-15 00:55:29.164350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.379 [2024-05-15 00:55:29.164376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.379 [2024-05-15 00:55:29.164405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.379 [2024-05-15 00:55:29.164421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.379 [2024-05-15 00:55:29.164439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.379 [2024-05-15 00:55:29.164454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.379 [2024-05-15 00:55:29.164471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
... 124 more (WRITE sqid:1 cid:1-62 nsid:1 lba:24704-32512 len:128, all ABORTED - SQ DELETION)
00:18:42.381 [2024-05-15 00:55:29.166473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.381 [2024-05-15 00:55:29.166488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:42.381 [2024-05-15 00:55:29.166607] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x244d7b0 was disconnected and freed. reset controller.
00:18:42.381 [2024-05-15 00:55:29.167422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.381 [2024-05-15 00:55:29.167463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... 124 more (READ sqid:1 cid:1-62 nsid:1 lba:24704-32512 len:128, all ABORTED - SQ DELETION)
00:18:42.383 [2024-05-15 00:55:29.169680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.383 [2024-05-15 00:55:29.169695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:42.383 [2024-05-15 00:55:29.169825] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2308a90 was disconnected and freed. reset controller.
00:18:42.383 [2024-05-15 00:55:29.171383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.383 [2024-05-15 00:55:29.171425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 00:55:29.171454-00:55:29.171923: fifteen further identical command/completion pairs condensed: WRITE sqid:1 cid:49-63 nsid:1, lba 30848-32640 in steps of 128, len:128, each aborted with "ABORTED - SQ DELETION (00/08)" ...]
[... 00:55:29.171948-00:55:29.173507: forty-eight identical command/completion pairs condensed: READ sqid:1 cid:0-47 nsid:1, lba 24576-30592 in steps of 128, len:128, each aborted with "ABORTED - SQ DELETION (00/08)" ...]
00:18:42.385 [2024-05-15 00:55:29.173985] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24ca880 was disconnected and freed. reset controller.
00:18:42.385 [2024-05-15 00:55:29.174104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235cf40 (9): Bad file descriptor
00:18:42.385 [2024-05-15 00:55:29.174143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2339660 (9): Bad file descriptor
00:18:42.385 [2024-05-15 00:55:29.174176] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2453770 (9): Bad file descriptor
00:18:42.385 [2024-05-15 00:55:29.174202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233a1e0 (9): Bad file descriptor
00:18:42.385 [2024-05-15 00:55:29.174226] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6d730 (9): Bad file descriptor
00:18:42.385 [2024-05-15 00:55:29.174255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230f220 (9): Bad file descriptor
00:18:42.385 [2024-05-15 00:55:29.174286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d9620 (9): Bad file descriptor
00:18:42.385 [2024-05-15 00:55:29.174315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2339cf0 (9): Bad file descriptor
00:18:42.385 [2024-05-15 00:55:29.174344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2367500 (9): Bad file descriptor
00:18:42.385 [2024-05-15 00:55:29.174369] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2370f20 (9): Bad file descriptor
00:18:42.385 [2024-05-15 00:55:29.178968] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:18:42.385 [2024-05-15 00:55:29.180045] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:18:42.385 [2024-05-15 00:55:29.180085] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:18:42.385 [2024-05-15 00:55:29.180322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.385 [2024-05-15 00:55:29.180577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.385 [2024-05-15 00:55:29.180605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235cf40 with addr=10.0.0.2, port=4420
00:18:42.385 [2024-05-15 00:55:29.180625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235cf40 is same with the state(5) to be set
00:18:42.385 [2024-05-15 00:55:29.181161] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:18:42.385 [2024-05-15 00:55:29.181252] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:18:42.385 [2024-05-15 00:55:29.181981] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:18:42.385 [2024-05-15 00:55:29.182315] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:18:42.385 [2024-05-15 00:55:29.182570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.385 [2024-05-15 00:55:29.182754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.385 [2024-05-15 00:55:29.182781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x233a1e0 with addr=10.0.0.2, port=4420
00:18:42.385 [2024-05-15 00:55:29.182800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233a1e0 is same with the state(5) to be set
00:18:42.385 [2024-05-15 00:55:29.182998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.385 [2024-05-15 00:55:29.183164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.385 [2024-05-15 00:55:29.183194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2453770 with addr=10.0.0.2, port=4420
00:18:42.385 [2024-05-15 00:55:29.183212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453770 is same with the state(5) to be set
00:18:42.385 [2024-05-15 00:55:29.183239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235cf40 (9): Bad file descriptor
00:18:42.386 [2024-05-15 00:55:29.183326] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:18:42.386 [2024-05-15 00:55:29.183408] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:18:42.386 [2024-05-15 00:55:29.183504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.386 [2024-05-15 00:55:29.183530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 00:55:29.183564-00:55:29.185630: sixty-three further identical command/completion pairs condensed: READ sqid:1 cid:1-63 nsid:1, lba 24704-32640 in steps of 128, len:128, each aborted with "ABORTED - SQ DELETION (00/08)" ...]
00:18:42.387 [2024-05-15 00:55:29.185647] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244ecf0 is same with the state(5) to be set
00:18:42.387 [2024-05-15 00:55:29.185743] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x244ecf0 was disconnected and freed. reset controller.
00:18:42.387 [2024-05-15 00:55:29.186225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.387 [2024-05-15 00:55:29.186395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.387 [2024-05-15 00:55:29.186420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2367500 with addr=10.0.0.2, port=4420
00:18:42.387 [2024-05-15 00:55:29.186439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2367500 is same with the state(5) to be set
00:18:42.387 [2024-05-15 00:55:29.186469] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233a1e0 (9): Bad file descriptor
00:18:42.387 [2024-05-15 00:55:29.186492] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2453770 (9): Bad file descriptor
00:18:42.387 [2024-05-15 00:55:29.186510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:18:42.387 [2024-05-15 00:55:29.186525] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:18:42.387 [2024-05-15 00:55:29.186550] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:18:42.387 [2024-05-15 00:55:29.186653] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:42.387 [2024-05-15 00:55:29.188152] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:42.387 [2024-05-15 00:55:29.188218] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:18:42.387 [2024-05-15 00:55:29.188264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2367500 (9): Bad file descriptor
00:18:42.387 [2024-05-15 00:55:29.188287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:18:42.387 [2024-05-15 00:55:29.188302] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:18:42.387 [2024-05-15 00:55:29.188316] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:18:42.387 [2024-05-15 00:55:29.188337] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:18:42.387 [2024-05-15 00:55:29.188351] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:18:42.387 [2024-05-15 00:55:29.188365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:18:42.387 [2024-05-15 00:55:29.188449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.387 [2024-05-15 00:55:29.188474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 00:55:29.188502-00:55:29.190116: forty-nine further identical command/completion pairs condensed: READ sqid:1 cid:1-49 nsid:1, lba 16512-22656 in steps of 128, len:128, each aborted with "ABORTED - SQ DELETION (00/08)" ...]
00:18:42.388 [2024-05-15 00:55:29.190133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.190148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.190164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.190179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.190196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.190211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.190228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.190242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.190259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.190274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.190291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.190305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.190323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.190337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.190354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.190369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.190390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.190405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.190421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.190436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.190454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.190469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.190486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.190500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.190517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.190531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.190549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.190563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.190580] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1e90 is same with the state(5) to be set 00:18:42.388 [2024-05-15 00:55:29.192125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192331] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.192959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.192984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.193000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.193017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.193031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.388 [2024-05-15 00:55:29.193048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.388 [2024-05-15 00:55:29.193063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:42.389 [2024-05-15 00:55:29.193660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.193952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.193967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 
00:55:29.193987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.194003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.194019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.194034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.194051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.194065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.194082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.194096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.194113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.194127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.194144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.194159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.194176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.194190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.194207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.194222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.194238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d30b0 is same with the state(5) to be set 00:18:42.389 [2024-05-15 00:55:29.195796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.195833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.195860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.195877] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.195894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.195909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.195926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.195947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.195972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.195988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.389 [2024-05-15 00:55:29.196894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.389 [2024-05-15 00:55:29.196909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.196926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.196947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.196965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.196980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.196996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.197882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.197899] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24501a0 is same with the state(5) to be set 00:18:42.390 [2024-05-15 00:55:29.199457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.199496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.199522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.199539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.199557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.199580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.199597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.199612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.199629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.199644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.199662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.199677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.199694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.199709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.199728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.199743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.199760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.199775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.199792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.199807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.199824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.199839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.199856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.199871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.199889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.199904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.199921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.199942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.199961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.199976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.199998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.200970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.390 [2024-05-15 00:55:29.200985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.390 [2024-05-15 00:55:29.201003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.201018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:42.391 [2024-05-15 00:55:29.201050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.201081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.201112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.201143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.201174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.201210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.201242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.201273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.201305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.201336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 
00:55:29.201367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.201399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.201431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.201462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.201494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.201526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.201557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.201575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2307590 is same with the state(5) to be set 00:18:42.391 [2024-05-15 00:55:29.203095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.203978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.203994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.391 [2024-05-15 00:55:29.204732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.391 [2024-05-15 00:55:29.204747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.392 [2024-05-15 00:55:29.204765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.392 [2024-05-15 00:55:29.204783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.392 [2024-05-15 00:55:29.204801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.392 [2024-05-15 00:55:29.204816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.392 [2024-05-15 00:55:29.204835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.392 [2024-05-15 00:55:29.204850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.392 [2024-05-15 00:55:29.204867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.392 [2024-05-15 00:55:29.204882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.392 [2024-05-15 00:55:29.204899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.392 [2024-05-15 00:55:29.204914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.392 [2024-05-15 00:55:29.204939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.392 [2024-05-15 00:55:29.204956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.392 [2024-05-15 00:55:29.204974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.392 [2024-05-15 00:55:29.204989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.392 [2024-05-15 00:55:29.205006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.392 [2024-05-15 00:55:29.205021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.392 [2024-05-15 00:55:29.205038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.392 [2024-05-15 00:55:29.205053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.392 [2024-05-15 00:55:29.205070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.392 [2024-05-15 00:55:29.205085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.392 [2024-05-15 00:55:29.205103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.392 [2024-05-15 00:55:29.205117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.392 [2024-05-15 00:55:29.205135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.392 [2024-05-15 00:55:29.205151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.392 [2024-05-15 00:55:29.205168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.392 [2024-05-15 00:55:29.205183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.392 [2024-05-15 00:55:29.205206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:42.392 [2024-05-15 00:55:29.205225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:42.392 [2024-05-15 00:55:29.205242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230b3c0 is same with the state(5) to be set
00:18:42.392 [2024-05-15 00:55:29.207364] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:42.392 [2024-05-15 00:55:29.207406] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:42.392 [2024-05-15 00:55:29.207426] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:42.392 [2024-05-15 00:55:29.207452] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:18:42.392 [2024-05-15 00:55:29.207471] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:18:42.392 [2024-05-15 00:55:29.207777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.207967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.207995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2339cf0 with addr=10.0.0.2, port=4420
00:18:42.392 [2024-05-15 00:55:29.208015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2339cf0 is same with the state(5) to be set
00:18:42.392 [2024-05-15 00:55:29.208034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:18:42.392 [2024-05-15 00:55:29.208049] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:18:42.392 [2024-05-15 00:55:29.208067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:18:42.392 [2024-05-15 00:55:29.208155] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:42.392 [2024-05-15 00:55:29.208181] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:42.392 [2024-05-15 00:55:29.208207] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
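The repeated completions above all carry status (00/08); each READ is len:128 blocks with the lba advancing in steps of 128, i.e. the sequential verify workload being aborted in order. In spdk_nvme_print_completion output that pair reads as (status code type / status code): SCT 0x0 is the NVMe generic command status set and SC 0x08 is "Command Aborted due to SQ Deletion", the expected status when the I/O submission queues are torn down during a controller reset. A minimal sketch for tallying these completions from a saved copy of this console output (the file name nvmf.log and the regex are illustrative assumptions, not part of the test output):

import re
from collections import Counter

# Matches completion lines such as:
#   ... spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...
COMPLETION_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<text>.+?) "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:(?P<qid>\d+)"
)

counts = Counter()
with open("nvmf.log") as f:          # assumed capture of the console output above
    for line in f:
        m = COMPLETION_RE.search(line)
        if m:
            # sct/sc are hex: (00/08) -> SCT 0x0 generic, SC 0x08 aborted due to SQ deletion
            counts[(m["text"], int(m["sct"], 16), int(m["sc"], 16))] += 1

for (text, sct, sc), n in counts.most_common():
    print(f"{n:6d}  sct=0x{sct:x} sc=0x{sc:02x}  {text}")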
00:18:42.392 [2024-05-15 00:55:29.208237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2339cf0 (9): Bad file descriptor
00:18:42.392 [2024-05-15 00:55:29.208652] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:18:42.392 task offset: 25472 on job bdev=Nvme8n1 fails
00:18:42.392
00:18:42.392 Latency(us)
00:18:42.392 Device Information     : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average        min        max
00:18:42.392 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:42.392 Job: Nvme1n1 ended in about 1.30 seconds with error
00:18:42.392 Verification LBA range: start 0x0 length 0x400
00:18:42.392      Nvme1n1           :       1.30    98.73     6.17    49.36    0.00  427968.28   34758.35  382147.70
00:18:42.392 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:42.392 Job: Nvme2n1 ended in about 1.30 seconds with error
00:18:42.392 Verification LBA range: start 0x0 length 0x400
00:18:42.392      Nvme2n1           :       1.30   147.68     9.23    49.23    0.00  316123.78   19320.98  332437.43
00:18:42.392 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:42.392 Job: Nvme3n1 ended in about 1.28 seconds with error
00:18:42.392 Verification LBA range: start 0x0 length 0x400
00:18:42.392      Nvme3n1           :       1.28   149.95     9.37    49.98    0.00  305478.16   16699.54  341758.10
00:18:42.392 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:42.392 Job: Nvme4n1 ended in about 1.29 seconds with error
00:18:42.392 Verification LBA range: start 0x0 length 0x400
00:18:42.392      Nvme4n1           :       1.29   148.54     9.28    49.51    0.00  302860.52   48933.55  312242.63
00:18:42.392 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:42.392 Job: Nvme5n1 ended in about 1.30 seconds with error
00:18:42.392 Verification LBA range: start 0x0 length 0x400
00:18:42.392      Nvme5n1           :       1.30    98.18     6.14    49.09    0.00  400106.38   28544.57  403895.94
00:18:42.392 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:42.392 Job: Nvme6n1 ended in about 1.31 seconds with error
00:18:42.392 Verification LBA range: start 0x0 length 0x400
00:18:42.392      Nvme6n1           :       1.31   146.85     9.18    48.95    0.00  295214.08   23981.32  292047.83
00:18:42.392 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:42.392 Job: Nvme7n1 ended in about 1.28 seconds with error
00:18:42.392 Verification LBA range: start 0x0 length 0x400
00:18:42.392      Nvme7n1           :       1.28   149.76     9.36    49.92    0.00  283176.96   17767.54  363506.35
00:18:42.392 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:42.392 Job: Nvme8n1 ended in about 1.28 seconds with error
00:18:42.392 Verification LBA range: start 0x0 length 0x400
00:18:42.392      Nvme8n1           :       1.28   150.50     9.41    50.17    0.00  276007.54   11262.48  335544.32
00:18:42.392 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:42.392 Job: Nvme9n1 ended in about 1.31 seconds with error
00:18:42.392 Verification LBA range: start 0x0 length 0x400
00:18:42.392      Nvme9n1           :       1.31    97.63     6.10    48.81    0.00  372465.97   60584.39  349525.33
00:18:42.392 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:42.392 Job: Nvme10n1 ended in about 1.28 seconds with error
00:18:42.392 Verification LBA range: start 0x0 length 0x400
00:18:42.392      Nvme10n1          :       1.28   149.59     9.35    49.86    0.00  266817.80   15728.64  329330.54
00:18:42.392 ===================================================================================================================
00:18:42.392      Total             :               1337.40    83.59   494.89    0.00  318495.60   11262.48  403895.94
00:18:42.392 [2024-05-15 00:55:29.238420] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:18:42.392 [2024-05-15 00:55:29.238511] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:18:42.392 [2024-05-15 00:55:29.238545] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:42.392 [2024-05-15 00:55:29.238892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.239095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.239123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230f220 with addr=10.0.0.2, port=4420
00:18:42.392 [2024-05-15 00:55:29.239145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f220 is same with the state(5) to be set
00:18:42.392 [2024-05-15 00:55:29.239296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.239544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.239569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d9620 with addr=10.0.0.2, port=4420
00:18:42.392 [2024-05-15 00:55:29.239586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d9620 is same with the state(5) to be set
00:18:42.392 [2024-05-15 00:55:29.239750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.240028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.240079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6d730 with addr=10.0.0.2, port=4420
00:18:42.392 [2024-05-15 00:55:29.240099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6d730 is same with the state(5) to be set
00:18:42.392 [2024-05-15 00:55:29.241737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:18:42.392 [2024-05-15 00:55:29.241787] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:18:42.392 [2024-05-15 00:55:29.241823] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:18:42.392 [2024-05-15 00:55:29.242125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.242316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.242346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2339660 with addr=10.0.0.2, port=4420
00:18:42.392 [2024-05-15 00:55:29.242365] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2339660 is same with the state(5) to be set
00:18:42.392 [2024-05-15 00:55:29.242519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.242657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.242682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2370f20 with addr=10.0.0.2, port=4420
00:18:42.392 [2024-05-15 00:55:29.242699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2370f20 is same with the state(5) to be set
00:18:42.392 [2024-05-15 00:55:29.242727] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230f220 (9): Bad file descriptor
00:18:42.392 [2024-05-15 00:55:29.242752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d9620 (9): Bad file descriptor
00:18:42.392 [2024-05-15 00:55:29.242772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6d730 (9): Bad file descriptor
00:18:42.392 [2024-05-15 00:55:29.242797] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:18:42.392 [2024-05-15 00:55:29.242812] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:18:42.392 [2024-05-15 00:55:29.242830] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:18:42.392 [2024-05-15 00:55:29.242905] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:42.392 [2024-05-15 00:55:29.242940] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:42.392 [2024-05-15 00:55:29.242965] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:42.392 [2024-05-15 00:55:29.242986] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:42.392 [2024-05-15 00:55:29.243119] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
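A quick consistency check on the bdevperf summary table above: every job ran with IO size: 65536, so MiB/s should equal IOPS x 65536 / 2^20 = IOPS / 16, and the reported columns match that to rounding (e.g. Nvme2n1: 147.68 / 16 = 9.23). A small sketch, with the row values copied from the table:

# Sanity-check of the summary above: IO size is 65536 bytes,
# so MiB/s should equal IOPS / 16 for every row.
IO_SIZE = 65536
rows = {                      # device: (IOPS, reported MiB/s), copied from the table
    "Nvme1n1": (98.73, 6.17),
    "Nvme2n1": (147.68, 9.23),
    "Nvme8n1": (150.50, 9.41),
    "Total":   (1337.40, 83.59),
}
for dev, (iops, mibs) in rows.items():
    computed = iops * IO_SIZE / 2**20
    assert abs(computed - mibs) < 0.01, dev
    print(f"{dev:8s} {iops:8.2f} IOPS -> {computed:6.2f} MiB/s (reported {mibs})")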
00:18:42.392 [2024-05-15 00:55:29.243270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.243422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.243447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235cf40 with addr=10.0.0.2, port=4420
00:18:42.392 [2024-05-15 00:55:29.243464] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235cf40 is same with the state(5) to be set
00:18:42.392 [2024-05-15 00:55:29.243617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.243806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.243832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2453770 with addr=10.0.0.2, port=4420
00:18:42.392 [2024-05-15 00:55:29.243855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453770 is same with the state(5) to be set
00:18:42.392 [2024-05-15 00:55:29.244000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.244149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:42.392 [2024-05-15 00:55:29.244175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x233a1e0 with addr=10.0.0.2, port=4420
00:18:42.392 [2024-05-15 00:55:29.244201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233a1e0 is same with the state(5) to be set
00:18:42.392 [2024-05-15 00:55:29.244222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2339660 (9): Bad file descriptor
00:18:42.392 [2024-05-15 00:55:29.244243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2370f20 (9): Bad file descriptor
00:18:42.392 [2024-05-15 00:55:29.244261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:42.392 [2024-05-15 00:55:29.244275] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:42.392 [2024-05-15 00:55:29.244289] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:42.392 [2024-05-15 00:55:29.244310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:18:42.392 [2024-05-15 00:55:29.244324] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:18:42.392 [2024-05-15 00:55:29.244344] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:18:42.392 [2024-05-15 00:55:29.244362] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:18:42.392 [2024-05-15 00:55:29.244375] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:18:42.392 [2024-05-15 00:55:29.244389] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
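On the connect() failures above: errno = 111 is ECONNREFUSED on Linux, i.e. the TCP connection to 10.0.0.2:4420 was actively refused because the target side was no longer accepting, and the subsequent "(9): Bad file descriptor" flush errors are EBADF on an already-closed socket. A tiny illustration (loopback port 4420 here is an assumption; any port with no listener behaves the same):

import errno
import socket

# errno 111 in the posix_sock_create errors above is ECONNREFUSED,
# and the "(9)" in the flush errors is EBADF:
print(errno.ECONNREFUSED, errno.EBADF)   # -> 111 9 on Linux

# connect() to a port nobody is listening on fails the same way:
try:
    socket.create_connection(("127.0.0.1", 4420), timeout=1)
except OSError as e:
    print(e.errno, e.strerror)           # 111 Connection refused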
00:18:42.392 [2024-05-15 00:55:29.244489] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:18:42.392 [2024-05-15 00:55:29.244516] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:42.392 [2024-05-15 00:55:29.244531] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:42.392 [2024-05-15 00:55:29.244543] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:42.392 [2024-05-15 00:55:29.244575] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235cf40 (9): Bad file descriptor 00:18:42.392 [2024-05-15 00:55:29.244596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2453770 (9): Bad file descriptor 00:18:42.392 [2024-05-15 00:55:29.244615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233a1e0 (9): Bad file descriptor 00:18:42.392 [2024-05-15 00:55:29.244632] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:18:42.392 [2024-05-15 00:55:29.244645] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:18:42.392 [2024-05-15 00:55:29.244659] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:18:42.392 [2024-05-15 00:55:29.244676] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:18:42.392 [2024-05-15 00:55:29.244696] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:18:42.392 [2024-05-15 00:55:29.244710] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:18:42.392 [2024-05-15 00:55:29.244754] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:42.392 [2024-05-15 00:55:29.244773] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:42.392 [2024-05-15 00:55:29.244938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:42.392 [2024-05-15 00:55:29.245111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:42.392 [2024-05-15 00:55:29.245135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2367500 with addr=10.0.0.2, port=4420 00:18:42.392 [2024-05-15 00:55:29.245152] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2367500 is same with the state(5) to be set 00:18:42.392 [2024-05-15 00:55:29.245173] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:18:42.392 [2024-05-15 00:55:29.245188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:18:42.392 [2024-05-15 00:55:29.245202] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:18:42.392 [2024-05-15 00:55:29.245220] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:18:42.392 [2024-05-15 00:55:29.245235] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:18:42.392 [2024-05-15 00:55:29.245248] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:18:42.392 [2024-05-15 00:55:29.245265] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:18:42.392 [2024-05-15 00:55:29.245278] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:18:42.392 [2024-05-15 00:55:29.245292] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:18:42.392 [2024-05-15 00:55:29.245336] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:42.392 [2024-05-15 00:55:29.245354] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:42.392 [2024-05-15 00:55:29.245366] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:42.392 [2024-05-15 00:55:29.245384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2367500 (9): Bad file descriptor 00:18:42.392 [2024-05-15 00:55:29.245431] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:18:42.393 [2024-05-15 00:55:29.245449] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:18:42.393 [2024-05-15 00:55:29.245463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:18:42.393 [2024-05-15 00:55:29.245505] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
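The errno = 111 in the posix_sock_create failures above is ECONNREFUSED: the target side is going away as part of nvmf_shutdown_tc3, so every reconnect attempt issued from spdk_nvme_ctrlr_reconnect_poll_async is refused, each controller (cnode1 through cnode10) is moved to the failed state, and the queued resets complete with "Resetting controller failed.". Since the test goes on to pass below, this error storm appears to be the expected fallout of tearing the target down, not a failure in itself. A quick way to decode the errno on any Linux host (a sketch, not part of the test scripts):

  # errno 111 on Linux is ECONNREFUSED ("Connection refused")
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # -> ECONNREFUSED - Connection refused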
00:18:42.651 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:18:42.651 00:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:18:43.586 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 4040141 00:18:43.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (4040141) - No such process 00:18:43.586 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:18:43.586 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:18:43.586 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:18:43.586 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:43.586 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:43.586 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:18:43.586 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:43.586 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:18:43.586 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:43.586 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:18:43.586 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:43.586 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:43.586 rmmod nvme_tcp 00:18:43.586 rmmod nvme_fabrics 00:18:43.586 rmmod nvme_keyring 00:18:43.586 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:43.846 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:18:43.846 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:18:43.846 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:43.846 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:43.846 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:43.846 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:43.846 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:43.846 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:43.846 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.846 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.846 00:55:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.749 00:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:45.749 00:18:45.749 real 0m8.003s 00:18:45.749 user 0m20.623s 00:18:45.749 sys 0m1.557s 00:18:45.749 
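The teardown above (stoptarget, then nvmftestfini) removes the bdevperf config and RPC journal, then unloads the kernel initiator stack; note that the single modprobe -v -r nvme-tcp already drops the now-unused dependencies, which is why rmmod nvme_tcp, rmmod nvme_fabrics and rmmod nvme_keyring are all printed for it, and the follow-up modprobe -v -r nvme-fabrics is effectively a no-op retry inside the for-loop. A hedged sketch of the equivalent manual cleanup (the ip netns del line is an assumption about what _remove_spdk_ns does; it is not shown in the trace):

  sync                                 # flush dirty pages before unloading modules
  modprobe -v -r nvme-tcp              # also removes nvme_fabrics / nvme_keyring deps
  modprobe -v -r nvme-fabrics          # retry; dependencies are already gone
  ip netns del cvl_0_0_ns_spdk 2>/dev/null || true   # assumed _remove_spdk_ns equivalent
  ip -4 addr flush cvl_0_1             # drop the initiator-side test address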
00:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:45.749 00:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:45.749 ************************************ 00:18:45.749 END TEST nvmf_shutdown_tc3 00:18:45.749 ************************************ 00:18:45.749 00:55:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:18:45.749 00:18:45.749 real 0m27.611s 00:18:45.749 user 1m19.172s 00:18:45.749 sys 0m6.190s 00:18:45.749 00:55:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:45.749 00:55:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:45.749 ************************************ 00:18:45.749 END TEST nvmf_shutdown 00:18:45.749 ************************************ 00:18:45.749 00:55:32 nvmf_tcp -- nvmf/nvmf.sh@84 -- # timing_exit target 00:18:45.749 00:55:32 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.749 00:55:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:45.749 00:55:32 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_enter host 00:18:45.749 00:55:32 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:45.749 00:55:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:45.749 00:55:32 nvmf_tcp -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:18:45.749 00:55:32 nvmf_tcp -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:18:45.750 00:55:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:45.750 00:55:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:45.750 00:55:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:45.750 ************************************ 00:18:45.750 START TEST nvmf_multicontroller 00:18:45.750 ************************************ 00:18:45.750 00:55:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:18:46.007 * Looking for test storage... 
00:18:46.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.007 00:55:32 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:18:46.008 00:55:32 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:18:46.008 00:55:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.396 00:55:34 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:47.396 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:47.396 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:47.396 Found net devices under 0000:08:00.0: cvl_0_0 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:47.396 Found net devices under 0000:08:00.1: cvl_0_1 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:47.396 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.397 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.397 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:47.397 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:47.397 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.397 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.655 00:55:34 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:47.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:18:47.655 00:18:47.655 --- 10.0.0.2 ping statistics --- 00:18:47.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.655 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:47.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:18:47.655 00:18:47.655 --- 10.0.0.1 ping statistics --- 00:18:47.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.655 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=4042114 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 4042114 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 4042114 ']' 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:47.655 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:47.655 [2024-05-15 00:55:34.627043] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:18:47.655 [2024-05-15 00:55:34.627147] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.655 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.655 [2024-05-15 00:55:34.694221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:47.914 [2024-05-15 00:55:34.813082] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.914 [2024-05-15 00:55:34.813147] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.914 [2024-05-15 00:55:34.813162] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.914 [2024-05-15 00:55:34.813176] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.914 [2024-05-15 00:55:34.813188] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.914 [2024-05-15 00:55:34.813270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.914 [2024-05-15 00:55:34.813326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.914 [2024-05-15 00:55:34.813322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:47.914 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:47.914 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:18:47.914 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:47.914 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:47.914 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:47.914 00:55:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.914 00:55:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:47.914 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.914 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:47.914 [2024-05-15 00:55:34.953865] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.914 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.914 00:55:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:47.914 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.914 00:55:34 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.173 Malloc0 00:18:48.173 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.173 00:55:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:48.173 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.173 00:55:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.173 [2024-05-15 00:55:35.019015] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:48.173 [2024-05-15 00:55:35.019299] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.173 [2024-05-15 00:55:35.027163] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.173 Malloc1 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.173 00:55:35 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=4042137 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 4042137 /var/tmp/bdevperf.sock 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 4042137 ']' 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:48.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
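By this point the target side has been configured with the RPC sequence traced above; condensed, it is (rpc_cmd being the autotest wrapper around scripts/rpc.py):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # the same pattern repeats with Malloc1 and nqn.2016-06.io.spdk:cnode2

bdevperf was just launched with -z (start suspended, wait for RPCs) on its own socket. What follows is one successful attach of NVMe0 to cnode1 and three deliberate re-attach attempts under the same controller name (different hostnqn, different subsystem, multipath disabled), each of which must fail with JSON-RPC error -114, as the request/response dumps below show. A condensed sketch of the pass/fail pair, commands as in the trace:

  # first attach succeeds and creates bdev NVMe0n1
  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # same name, different subsystem: expected to fail with -114
  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
  # -> "A controller named NVMe0 already exists with the specified network path"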
00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:48.173 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.431 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:48.431 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:18:48.431 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:18:48.431 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.431 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.689 NVMe0n1 00:18:48.689 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.689 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:48.689 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:18:48.689 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.689 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.689 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.689 1 00:18:48.689 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:18:48.689 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:18:48.689 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:18:48.689 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.690 request: 00:18:48.690 { 00:18:48.690 "name": "NVMe0", 00:18:48.690 "trtype": "tcp", 00:18:48.690 "traddr": "10.0.0.2", 00:18:48.690 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:18:48.690 "hostaddr": "10.0.0.2", 00:18:48.690 "hostsvcid": "60000", 00:18:48.690 "adrfam": "ipv4", 00:18:48.690 "trsvcid": "4420", 00:18:48.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.690 "method": 
"bdev_nvme_attach_controller", 00:18:48.690 "req_id": 1 00:18:48.690 } 00:18:48.690 Got JSON-RPC error response 00:18:48.690 response: 00:18:48.690 { 00:18:48.690 "code": -114, 00:18:48.690 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:18:48.690 } 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.690 request: 00:18:48.690 { 00:18:48.690 "name": "NVMe0", 00:18:48.690 "trtype": "tcp", 00:18:48.690 "traddr": "10.0.0.2", 00:18:48.690 "hostaddr": "10.0.0.2", 00:18:48.690 "hostsvcid": "60000", 00:18:48.690 "adrfam": "ipv4", 00:18:48.690 "trsvcid": "4420", 00:18:48.690 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:48.690 "method": "bdev_nvme_attach_controller", 00:18:48.690 "req_id": 1 00:18:48.690 } 00:18:48.690 Got JSON-RPC error response 00:18:48.690 response: 00:18:48.690 { 00:18:48.690 "code": -114, 00:18:48.690 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:18:48.690 } 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.690 request: 00:18:48.690 { 00:18:48.690 "name": "NVMe0", 00:18:48.690 "trtype": "tcp", 00:18:48.690 "traddr": "10.0.0.2", 00:18:48.690 "hostaddr": "10.0.0.2", 00:18:48.690 "hostsvcid": "60000", 00:18:48.690 "adrfam": "ipv4", 00:18:48.690 "trsvcid": "4420", 00:18:48.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.690 "multipath": "disable", 00:18:48.690 "method": "bdev_nvme_attach_controller", 00:18:48.690 "req_id": 1 00:18:48.690 } 00:18:48.690 Got JSON-RPC error response 00:18:48.690 response: 00:18:48.690 { 00:18:48.690 "code": -114, 00:18:48.690 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:18:48.690 } 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.690 request: 00:18:48.690 { 00:18:48.690 "name": "NVMe0", 00:18:48.690 "trtype": "tcp", 00:18:48.690 "traddr": "10.0.0.2", 00:18:48.690 "hostaddr": "10.0.0.2", 00:18:48.690 "hostsvcid": "60000", 00:18:48.690 "adrfam": "ipv4", 00:18:48.690 "trsvcid": "4420", 00:18:48.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.690 "multipath": "failover", 00:18:48.690 "method": "bdev_nvme_attach_controller", 00:18:48.690 "req_id": 1 00:18:48.690 } 00:18:48.690 Got JSON-RPC error response 00:18:48.690 response: 00:18:48.690 { 00:18:48.690 "code": -114, 00:18:48.690 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:18:48.690 } 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.690 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.948 00:18:48.948 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.948 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:48.948 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.948 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:48.948 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.948 00:55:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:18:48.948 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.948 00:55:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:49.207 00:18:49.207 00:55:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.207 00:55:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:49.207 00:55:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:18:49.207 00:55:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.207 00:55:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:49.207 00:55:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.207 00:55:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:18:49.207 00:55:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:50.158 0 00:18:50.158 00:55:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:18:50.158 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.158 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:50.158 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.158 00:55:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 4042137 00:18:50.158 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 4042137 ']' 00:18:50.158 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 4042137 00:18:50.159 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:18:50.159 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:50.159 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4042137 00:18:50.159 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:50.159 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:50.159 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4042137' 00:18:50.159 killing process with pid 4042137 00:18:50.159 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 4042137 00:18:50.159 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 4042137 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:18:50.417 00:55:37 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:18:50.417 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:18:50.417 [2024-05-15 00:55:35.130541] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:18:50.417 [2024-05-15 00:55:35.130646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4042137 ] 00:18:50.417 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.417 [2024-05-15 00:55:35.191132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.417 [2024-05-15 00:55:35.307794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.417 [2024-05-15 00:55:36.015717] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 44e1dd68-6c33-4992-8afb-c5374e8f3541 already exists 00:18:50.417 [2024-05-15 00:55:36.015762] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:44e1dd68-6c33-4992-8afb-c5374e8f3541 alias for bdev NVMe1n1 00:18:50.417 [2024-05-15 00:55:36.015782] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:18:50.417 Running I/O for 1 seconds... 
00:18:50.417 00:18:50.417 Latency(us) 00:18:50.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.417 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:18:50.417 NVMe0n1 : 1.00 16447.26 64.25 0.00 0.00 7768.22 6650.69 16408.27 00:18:50.417 =================================================================================================================== 00:18:50.417 Total : 16447.26 64.25 0.00 0.00 7768.22 6650.69 16408.27 00:18:50.417 Received shutdown signal, test time was about 1.000000 seconds 00:18:50.417 00:18:50.417 Latency(us) 00:18:50.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.417 =================================================================================================================== 00:18:50.417 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:50.417 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:50.417 00:55:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:50.417 rmmod nvme_tcp 00:18:50.417 rmmod nvme_fabrics 00:18:50.417 rmmod nvme_keyring 00:18:50.676 00:55:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:50.676 00:55:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:18:50.676 00:55:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:18:50.676 00:55:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 4042114 ']' 00:18:50.676 00:55:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 4042114 00:18:50.676 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 4042114 ']' 00:18:50.676 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 4042114 00:18:50.676 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:18:50.676 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:50.676 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4042114 00:18:50.676 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:50.676 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:50.676 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4042114' 00:18:50.676 killing process with pid 4042114 00:18:50.676 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 4042114 00:18:50.676 [2024-05-15 
00:55:37.507770] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:50.676 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 4042114 00:18:50.936 00:55:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:50.936 00:55:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:50.936 00:55:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:50.936 00:55:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:50.936 00:55:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:50.936 00:55:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.936 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.936 00:55:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.846 00:55:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:52.846 00:18:52.846 real 0m6.997s 00:18:52.846 user 0m11.751s 00:18:52.846 sys 0m1.941s 00:18:52.846 00:55:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:52.846 00:55:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:52.846 ************************************ 00:18:52.846 END TEST nvmf_multicontroller 00:18:52.846 ************************************ 00:18:52.846 00:55:39 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:18:52.846 00:55:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:52.846 00:55:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:52.846 00:55:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:52.846 ************************************ 00:18:52.846 START TEST nvmf_aer 00:18:52.846 ************************************ 00:18:52.846 00:55:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:18:53.105 * Looking for test storage... 
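A note on the nvmf_multicontroller pass that just finished above, before the aer run proceeds: the test drives SPDK's bdevperf against a single subsystem through two controller names and two listener ports, and deliberately provokes the -114 error by re-attaching NVMe0 on a network path it already uses. A minimal sketch of that RPC sequence, assuming a bdevperf instance already listening on /var/tmp/bdevperf.sock and a target exposing nqn.2016-06.io.spdk:cnode1 on 10.0.0.2 ports 4420 and 4421 as in this run (the harness's rpc_cmd helper is a thin wrapper around SPDK's scripts/rpc.py):

# Negative case: NVMe0 is already attached on 10.0.0.2:4420 from this host
# address/port, so this must fail with -114 ("A controller named NVMe0
# already exists with the specified network path").
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -i 10.0.0.2 -c 60000 -x failover || echo "re-attach rejected as expected"

# A second path for NVMe0 on listener port 4421 is accepted...
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# ...and detached again before a second controller name is attached.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -i 10.0.0.2 -c 60000

# Verify both controllers are visible, then start the timed I/O phase.
[ "$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers \
    | grep -c NVMe)" -eq 2 ]
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The throughput figures in the embedded try.txt are self-consistent: 16447.26 write IOPS at the 4096-byte I/O size is 16447.26 * 4096 / 2^20 ≈ 64.25 MiB/s, matching the MiB/s column of the latency table.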
00:18:53.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:18:53.105 00:55:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:54.540 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:54.541 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 
0x159b)' 00:18:54.541 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:54.541 Found net devices under 0000:08:00.0: cvl_0_0 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:54.541 Found net devices under 0000:08:00.1: cvl_0_1 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:54.541 
00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:54.541 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:54.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:54.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:18:54.799 00:18:54.799 --- 10.0.0.2 ping statistics --- 00:18:54.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.799 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:54.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:54.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:18:54.799 00:18:54.799 --- 10.0.0.1 ping statistics --- 00:18:54.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.799 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=4043858 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 4043858 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 4043858 ']' 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:54.799 00:55:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:54.799 [2024-05-15 00:55:41.724559] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:18:54.799 [2024-05-15 00:55:41.724649] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.799 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.799 [2024-05-15 00:55:41.789159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:55.057 [2024-05-15 00:55:41.906623] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.057 [2024-05-15 00:55:41.906685] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:55.057 [2024-05-15 00:55:41.906702] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.057 [2024-05-15 00:55:41.906715] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.057 [2024-05-15 00:55:41.906727] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:55.057 [2024-05-15 00:55:41.906814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.057 [2024-05-15 00:55:41.906866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.057 [2024-05-15 00:55:41.906913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:55.057 [2024-05-15 00:55:41.906916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:55.057 [2024-05-15 00:55:42.054627] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:55.057 Malloc0 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:55.057 [2024-05-15 00:55:42.104722] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:55.057 [2024-05-15 00:55:42.105018] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.057 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:55.315 [ 00:18:55.315 { 00:18:55.315 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:55.315 "subtype": "Discovery", 00:18:55.315 "listen_addresses": [], 00:18:55.315 "allow_any_host": true, 00:18:55.315 "hosts": [] 00:18:55.315 }, 00:18:55.315 { 00:18:55.315 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.315 "subtype": "NVMe", 00:18:55.315 "listen_addresses": [ 00:18:55.315 { 00:18:55.315 "trtype": "TCP", 00:18:55.315 "adrfam": "IPv4", 00:18:55.315 "traddr": "10.0.0.2", 00:18:55.315 "trsvcid": "4420" 00:18:55.315 } 00:18:55.315 ], 00:18:55.315 "allow_any_host": true, 00:18:55.315 "hosts": [], 00:18:55.315 "serial_number": "SPDK00000000000001", 00:18:55.315 "model_number": "SPDK bdev Controller", 00:18:55.315 "max_namespaces": 2, 00:18:55.315 "min_cntlid": 1, 00:18:55.315 "max_cntlid": 65519, 00:18:55.315 "namespaces": [ 00:18:55.316 { 00:18:55.316 "nsid": 1, 00:18:55.316 "bdev_name": "Malloc0", 00:18:55.316 "name": "Malloc0", 00:18:55.316 "nguid": "55354916D81C403197DFA40ED3F5B929", 00:18:55.316 "uuid": "55354916-d81c-4031-97df-a40ed3f5b929" 00:18:55.316 } 00:18:55.316 ] 00:18:55.316 } 00:18:55.316 ] 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=4043882 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:18:55.316 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:55.316 Malloc1 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.316 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:55.574 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.574 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:18:55.574 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.574 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:55.574 [ 00:18:55.574 { 00:18:55.574 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:55.574 "subtype": "Discovery", 00:18:55.574 "listen_addresses": [], 00:18:55.574 "allow_any_host": true, 00:18:55.574 "hosts": [] 00:18:55.574 }, 00:18:55.574 { 00:18:55.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.574 "subtype": "NVMe", 00:18:55.574 "listen_addresses": [ 00:18:55.574 { 00:18:55.574 "trtype": "TCP", 00:18:55.574 "adrfam": "IPv4", 00:18:55.574 "traddr": "10.0.0.2", 00:18:55.574 "trsvcid": "4420" 00:18:55.574 } 00:18:55.574 ], 00:18:55.574 "allow_any_host": true, 00:18:55.574 "hosts": [], 00:18:55.574 "serial_number": "SPDK00000000000001", 00:18:55.574 "model_number": "SPDK bdev Controller", 00:18:55.574 "max_namespaces": 2, 00:18:55.574 "min_cntlid": 1, 00:18:55.574 "max_cntlid": 65519, 00:18:55.574 "namespaces": [ 00:18:55.574 { 00:18:55.574 "nsid": 1, 00:18:55.574 "bdev_name": "Malloc0", 00:18:55.574 "name": "Malloc0", 00:18:55.574 "nguid": "55354916D81C403197DFA40ED3F5B929", 00:18:55.574 "uuid": "55354916-d81c-4031-97df-a40ed3f5b929" 00:18:55.574 }, 00:18:55.574 { 00:18:55.574 "nsid": 2, 00:18:55.574 "bdev_name": "Malloc1", 00:18:55.574 "name": "Malloc1", 00:18:55.574 "nguid": "D2ADC9BEF68A48178C0470BEBF9CBD91", 00:18:55.574 "uuid": "d2adc9be-f68a-4817-8c04-70bebf9cbd91" 00:18:55.574 } 00:18:55.574 ] 00:18:55.574 } 00:18:55.574 ] 00:18:55.574 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.574 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 4043882 00:18:55.574 Asynchronous Event Request test 00:18:55.574 Attaching to 10.0.0.2 00:18:55.574 Attached to 10.0.0.2 00:18:55.574 Registering asynchronous event callbacks... 00:18:55.574 Starting namespace attribute notice tests for all controllers... 00:18:55.574 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:55.574 aer_cb - Changed Namespace 00:18:55.574 Cleaning up... 
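The "aer_cb - Changed Namespace" line above is the payoff of this test: hot-adding a second namespace to a subsystem capped at two namespaces (-m 2) makes the target emit a Namespace Attribute Changed asynchronous event, which the host-side aer tool must observe. A sketch of the sequence as traced, assuming the target's default RPC socket /var/tmp/spdk.sock and this run's listener address; the touch file is evidently the tool's readiness signal, given the waitforfile polling loop above:

# Target side: TCP transport, a malloc bdev, and a two-namespace subsystem.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420

# Host side: start the AER listener and wait until it creates the touch file.
rm -f /tmp/aer_touch_file
test/nvme/aer/aer \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

# Hot-add nsid 2; this is what triggers the Changed Namespace AEN.
scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
wait    # the aer process exits once the expected event has arrived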
00:18:55.574 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:55.574 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.574 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:55.574 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.574 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:55.574 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.574 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:55.574 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.574 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:55.574 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:55.575 rmmod nvme_tcp 00:18:55.575 rmmod nvme_fabrics 00:18:55.575 rmmod nvme_keyring 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 4043858 ']' 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 4043858 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 4043858 ']' 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 4043858 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4043858 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4043858' 00:18:55.575 killing process with pid 4043858 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 4043858 00:18:55.575 [2024-05-15 00:55:42.540016] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:55.575 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 4043858 
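The teardown here follows a fixed pattern: delete the malloc bdevs and the subsystem, unload nvme-tcp/nvme-fabrics/nvme-keyring, then kill the target via the killprocess helper, whose guards are visible in the trace (non-empty pid, a kill -0 liveness probe, and a ps-based name check distinguishing a reactor from a sudo wrapper). A rough reconstruction of the path taken in this run; the sudo branch is elided since process_name was reactor_0 here:

# killprocess, as reconstructed from the xtrace above (a sketch, not the
# verbatim helper from autotest_common.sh).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0            # already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
    # the real helper special-cases process_name = sudo; not hit in this run
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null   # nvmf_tgt is a child of this shell, so reap it
}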
00:18:55.835 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:55.835 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:55.835 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:55.835 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:55.835 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:55.835 00:55:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.835 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.835 00:55:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.743 00:55:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:58.002 00:18:58.002 real 0m4.939s 00:18:58.002 user 0m3.866s 00:18:58.002 sys 0m1.633s 00:18:58.002 00:55:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:58.002 00:55:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:58.002 ************************************ 00:18:58.002 END TEST nvmf_aer 00:18:58.002 ************************************ 00:18:58.002 00:55:44 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:18:58.002 00:55:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:58.002 00:55:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:58.002 00:55:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:58.002 ************************************ 00:18:58.002 START TEST nvmf_async_init 00:18:58.002 ************************************ 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:18:58.002 * Looking for test storage... 
00:18:58.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.002 00:55:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=83159ba2c7174760a65e5910c3c80eb8 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:58.003 00:55:44 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:18:58.003 00:55:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:59.912 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:59.912 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:59.912 Found net devices under 0000:08:00.0: cvl_0_0 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
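This device scan, re-run at the start of every nvmf host test, whitelists NVMF-capable NICs by PCI vendor:device ID (Intel E810 0x1592/0x159b, X722 0x37d2, and a set of Mellanox ConnectX IDs) and resolves each hit to its kernel netdev through sysfs; the scan continues below with the second port, 0000:08:00.1. The sysfs resolution step reduces to:

# Map a PCI function to its kernel net interface name (sketch of the
# pci_net_devs logic traced above).
pci=0000:08:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"

With the two cvl_0_* ports in hand, nvmf_tcp_init then rebuilds the point-to-point topology pinged earlier in this log: the target port moves into the cvl_0_0_ns_spdk network namespace as 10.0.0.2/24, the initiator port stays in the root namespace as 10.0.0.1/24, and an iptables rule admits TCP port 4420:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # initiator-to-target smoke test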
00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:59.912 Found net devices under 0000:08:00.1: cvl_0_1 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:59.912 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:59.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:59.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:18:59.913 00:18:59.913 --- 10.0.0.2 ping statistics --- 00:18:59.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.913 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:59.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:59.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:18:59.913 00:18:59.913 --- 10.0.0.1 ping statistics --- 00:18:59.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.913 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=4045413 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 4045413 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 4045413 ']' 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:59.913 00:55:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:59.913 [2024-05-15 00:55:46.664654] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
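By this point nvmf_tcp_init has split the two ports across network namespaces and verified reachability with the two pings above. The same topology, condensed to the ip(8)/iptables calls the trace shows (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this run):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                       # target gets its own ns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator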
00:18:59.913 [2024-05-15 00:55:46.664747] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.913 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.913 [2024-05-15 00:55:46.744164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.913 [2024-05-15 00:55:46.898006] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.913 [2024-05-15 00:55:46.898083] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.913 [2024-05-15 00:55:46.898113] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.913 [2024-05-15 00:55:46.898138] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.913 [2024-05-15 00:55:46.898161] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.913 [2024-05-15 00:55:46.898220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.172 [2024-05-15 00:55:47.057426] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.172 null0 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 83159ba2c7174760a65e5910c3c80eb8 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.172 [2024-05-15 00:55:47.097459] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:00.172 [2024-05-15 00:55:47.097695] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.172 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.431 nvme0n1 00:19:00.431 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.431 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:00.431 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.431 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.431 [ 00:19:00.431 { 00:19:00.431 "name": "nvme0n1", 00:19:00.431 "aliases": [ 00:19:00.431 "83159ba2-c717-4760-a65e-5910c3c80eb8" 00:19:00.431 ], 00:19:00.431 "product_name": "NVMe disk", 00:19:00.431 "block_size": 512, 00:19:00.431 "num_blocks": 2097152, 00:19:00.431 "uuid": "83159ba2-c717-4760-a65e-5910c3c80eb8", 00:19:00.431 "assigned_rate_limits": { 00:19:00.431 "rw_ios_per_sec": 0, 00:19:00.431 "rw_mbytes_per_sec": 0, 00:19:00.431 "r_mbytes_per_sec": 0, 00:19:00.431 "w_mbytes_per_sec": 0 00:19:00.431 }, 00:19:00.431 "claimed": false, 00:19:00.431 "zoned": false, 00:19:00.431 "supported_io_types": { 00:19:00.431 "read": true, 00:19:00.431 "write": true, 00:19:00.431 "unmap": false, 00:19:00.431 "write_zeroes": true, 00:19:00.431 "flush": true, 00:19:00.431 "reset": true, 00:19:00.431 "compare": true, 00:19:00.431 "compare_and_write": true, 00:19:00.431 "abort": true, 00:19:00.431 "nvme_admin": true, 00:19:00.431 "nvme_io": true 00:19:00.431 }, 00:19:00.431 "memory_domains": [ 00:19:00.431 { 00:19:00.431 "dma_device_id": "system", 00:19:00.431 "dma_device_type": 1 00:19:00.431 } 00:19:00.431 ], 00:19:00.431 "driver_specific": { 00:19:00.431 "nvme": [ 00:19:00.431 { 00:19:00.431 "trid": { 00:19:00.431 "trtype": "TCP", 00:19:00.431 "adrfam": "IPv4", 00:19:00.431 "traddr": "10.0.0.2", 00:19:00.431 "trsvcid": "4420", 00:19:00.431 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:00.431 }, 
00:19:00.431 "ctrlr_data": { 00:19:00.431 "cntlid": 1, 00:19:00.431 "vendor_id": "0x8086", 00:19:00.431 "model_number": "SPDK bdev Controller", 00:19:00.431 "serial_number": "00000000000000000000", 00:19:00.431 "firmware_revision": "24.05", 00:19:00.431 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:00.431 "oacs": { 00:19:00.431 "security": 0, 00:19:00.431 "format": 0, 00:19:00.431 "firmware": 0, 00:19:00.431 "ns_manage": 0 00:19:00.431 }, 00:19:00.431 "multi_ctrlr": true, 00:19:00.431 "ana_reporting": false 00:19:00.431 }, 00:19:00.431 "vs": { 00:19:00.431 "nvme_version": "1.3" 00:19:00.431 }, 00:19:00.431 "ns_data": { 00:19:00.431 "id": 1, 00:19:00.431 "can_share": true 00:19:00.431 } 00:19:00.431 } 00:19:00.431 ], 00:19:00.431 "mp_policy": "active_passive" 00:19:00.431 } 00:19:00.431 } 00:19:00.431 ] 00:19:00.431 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.431 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:00.431 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.431 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.431 [2024-05-15 00:55:47.350268] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:00.431 [2024-05-15 00:55:47.350362] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3d2a0 (9): Bad file descriptor 00:19:00.690 [2024-05-15 00:55:47.492089] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:00.690 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.690 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:00.690 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.690 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.690 [ 00:19:00.690 { 00:19:00.690 "name": "nvme0n1", 00:19:00.690 "aliases": [ 00:19:00.690 "83159ba2-c717-4760-a65e-5910c3c80eb8" 00:19:00.690 ], 00:19:00.690 "product_name": "NVMe disk", 00:19:00.690 "block_size": 512, 00:19:00.690 "num_blocks": 2097152, 00:19:00.690 "uuid": "83159ba2-c717-4760-a65e-5910c3c80eb8", 00:19:00.690 "assigned_rate_limits": { 00:19:00.690 "rw_ios_per_sec": 0, 00:19:00.690 "rw_mbytes_per_sec": 0, 00:19:00.690 "r_mbytes_per_sec": 0, 00:19:00.690 "w_mbytes_per_sec": 0 00:19:00.690 }, 00:19:00.691 "claimed": false, 00:19:00.691 "zoned": false, 00:19:00.691 "supported_io_types": { 00:19:00.691 "read": true, 00:19:00.691 "write": true, 00:19:00.691 "unmap": false, 00:19:00.691 "write_zeroes": true, 00:19:00.691 "flush": true, 00:19:00.691 "reset": true, 00:19:00.691 "compare": true, 00:19:00.691 "compare_and_write": true, 00:19:00.691 "abort": true, 00:19:00.691 "nvme_admin": true, 00:19:00.691 "nvme_io": true 00:19:00.691 }, 00:19:00.691 "memory_domains": [ 00:19:00.691 { 00:19:00.691 "dma_device_id": "system", 00:19:00.691 "dma_device_type": 1 00:19:00.691 } 00:19:00.691 ], 00:19:00.691 "driver_specific": { 00:19:00.691 "nvme": [ 00:19:00.691 { 00:19:00.691 "trid": { 00:19:00.691 "trtype": "TCP", 00:19:00.691 "adrfam": "IPv4", 00:19:00.691 "traddr": "10.0.0.2", 00:19:00.691 "trsvcid": "4420", 00:19:00.691 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:00.691 }, 00:19:00.691 "ctrlr_data": { 00:19:00.691 "cntlid": 2, 00:19:00.691 
"vendor_id": "0x8086", 00:19:00.691 "model_number": "SPDK bdev Controller", 00:19:00.691 "serial_number": "00000000000000000000", 00:19:00.691 "firmware_revision": "24.05", 00:19:00.691 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:00.691 "oacs": { 00:19:00.691 "security": 0, 00:19:00.691 "format": 0, 00:19:00.691 "firmware": 0, 00:19:00.691 "ns_manage": 0 00:19:00.691 }, 00:19:00.691 "multi_ctrlr": true, 00:19:00.691 "ana_reporting": false 00:19:00.691 }, 00:19:00.691 "vs": { 00:19:00.691 "nvme_version": "1.3" 00:19:00.691 }, 00:19:00.691 "ns_data": { 00:19:00.691 "id": 1, 00:19:00.691 "can_share": true 00:19:00.691 } 00:19:00.691 } 00:19:00.691 ], 00:19:00.691 "mp_policy": "active_passive" 00:19:00.691 } 00:19:00.691 } 00:19:00.691 ] 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.uvOXHmtStQ 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.uvOXHmtStQ 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.691 [2024-05-15 00:55:47.542940] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:00.691 [2024-05-15 00:55:47.543071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uvOXHmtStQ 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.691 [2024-05-15 00:55:47.550970] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.691 00:55:47 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uvOXHmtStQ 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.691 [2024-05-15 00:55:47.558975] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.691 [2024-05-15 00:55:47.559034] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:00.691 nvme0n1 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.691 [ 00:19:00.691 { 00:19:00.691 "name": "nvme0n1", 00:19:00.691 "aliases": [ 00:19:00.691 "83159ba2-c717-4760-a65e-5910c3c80eb8" 00:19:00.691 ], 00:19:00.691 "product_name": "NVMe disk", 00:19:00.691 "block_size": 512, 00:19:00.691 "num_blocks": 2097152, 00:19:00.691 "uuid": "83159ba2-c717-4760-a65e-5910c3c80eb8", 00:19:00.691 "assigned_rate_limits": { 00:19:00.691 "rw_ios_per_sec": 0, 00:19:00.691 "rw_mbytes_per_sec": 0, 00:19:00.691 "r_mbytes_per_sec": 0, 00:19:00.691 "w_mbytes_per_sec": 0 00:19:00.691 }, 00:19:00.691 "claimed": false, 00:19:00.691 "zoned": false, 00:19:00.691 "supported_io_types": { 00:19:00.691 "read": true, 00:19:00.691 "write": true, 00:19:00.691 "unmap": false, 00:19:00.691 "write_zeroes": true, 00:19:00.691 "flush": true, 00:19:00.691 "reset": true, 00:19:00.691 "compare": true, 00:19:00.691 "compare_and_write": true, 00:19:00.691 "abort": true, 00:19:00.691 "nvme_admin": true, 00:19:00.691 "nvme_io": true 00:19:00.691 }, 00:19:00.691 "memory_domains": [ 00:19:00.691 { 00:19:00.691 "dma_device_id": "system", 00:19:00.691 "dma_device_type": 1 00:19:00.691 } 00:19:00.691 ], 00:19:00.691 "driver_specific": { 00:19:00.691 "nvme": [ 00:19:00.691 { 00:19:00.691 "trid": { 00:19:00.691 "trtype": "TCP", 00:19:00.691 "adrfam": "IPv4", 00:19:00.691 "traddr": "10.0.0.2", 00:19:00.691 "trsvcid": "4421", 00:19:00.691 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:00.691 }, 00:19:00.691 "ctrlr_data": { 00:19:00.691 "cntlid": 3, 00:19:00.691 "vendor_id": "0x8086", 00:19:00.691 "model_number": "SPDK bdev Controller", 00:19:00.691 "serial_number": "00000000000000000000", 00:19:00.691 "firmware_revision": "24.05", 00:19:00.691 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:00.691 "oacs": { 00:19:00.691 "security": 0, 00:19:00.691 "format": 0, 00:19:00.691 "firmware": 0, 00:19:00.691 "ns_manage": 0 00:19:00.691 }, 00:19:00.691 "multi_ctrlr": true, 00:19:00.691 "ana_reporting": false 00:19:00.691 }, 00:19:00.691 "vs": { 00:19:00.691 "nvme_version": "1.3" 00:19:00.691 }, 00:19:00.691 "ns_data": { 00:19:00.691 "id": 1, 00:19:00.691 "can_share": true 00:19:00.691 } 00:19:00.691 } 00:19:00.691 ], 00:19:00.691 "mp_policy": "active_passive" 00:19:00.691 } 00:19:00.691 } 00:19:00.691 ] 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.uvOXHmtStQ 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:00.691 rmmod nvme_tcp 00:19:00.691 rmmod nvme_fabrics 00:19:00.691 rmmod nvme_keyring 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 4045413 ']' 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 4045413 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 4045413 ']' 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 4045413 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4045413 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4045413' 00:19:00.691 killing process with pid 4045413 00:19:00.691 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 4045413 00:19:00.691 [2024-05-15 00:55:47.734627] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:00.692 [2024-05-15 00:55:47.734669] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:00.692 [2024-05-15 00:55:47.734685] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:00.692 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 4045413 00:19:00.952 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:00.952 00:55:47 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:00.952 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:00.952 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:00.952 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:00.952 00:55:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.952 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.952 00:55:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.494 00:55:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:03.494 00:19:03.494 real 0m5.132s 00:19:03.494 user 0m2.111s 00:19:03.494 sys 0m1.537s 00:19:03.494 00:55:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:03.494 00:55:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:03.494 ************************************ 00:19:03.494 END TEST nvmf_async_init 00:19:03.494 ************************************ 00:19:03.494 00:55:50 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:03.494 00:55:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:03.494 00:55:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:03.494 00:55:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:03.494 ************************************ 00:19:03.494 START TEST dma 00:19:03.494 ************************************ 00:19:03.494 00:55:50 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:03.494 * Looking for test storage... 
00:19:03.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:03.494 00:55:50 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.494 00:55:50 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.494 00:55:50 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.494 00:55:50 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.494 00:55:50 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.494 00:55:50 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.494 00:55:50 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.494 00:55:50 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:19:03.494 00:55:50 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.494 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.495 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.495 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:03.495 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:03.495 00:55:50 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:03.495 00:55:50 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:19:03.495 00:55:50 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:19:03.495 00:19:03.495 real 0m0.079s 00:19:03.495 user 0m0.040s 00:19:03.495 sys 0m0.045s 00:19:03.495 00:55:50 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:03.495 00:55:50 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:19:03.495 ************************************ 00:19:03.495 END TEST dma 00:19:03.495 ************************************ 00:19:03.495 00:55:50 nvmf_tcp -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:03.495 00:55:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:03.495 00:55:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:03.495 00:55:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:03.495 ************************************ 00:19:03.495 START TEST nvmf_identify 00:19:03.495 ************************************ 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:03.495 * Looking for test storage... 
00:19:03.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:19:03.495 00:55:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:04.873 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:04.873 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:19:04.873 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:04.873 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:04.873 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:04.873 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:04.873 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:04.873 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:19:04.873 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:04.873 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:19:04.873 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:19:04.873 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:19:04.873 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:19:04.873 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:19:04.873 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:04.874 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:04.874 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:04.874 Found net devices under 0000:08:00.0: cvl_0_0 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:04.874 Found net devices under 0000:08:00.1: cvl_0_1 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:04.874 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:05.132 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:05.132 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:05.132 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:05.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:05.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:19:05.132 00:19:05.132 --- 10.0.0.2 ping statistics --- 00:19:05.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.132 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:19:05.132 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:05.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:05.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:19:05.132 00:19:05.133 --- 10.0.0.1 ping statistics --- 00:19:05.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.133 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4047044 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4047044 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 4047044 ']' 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:05.133 00:55:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:05.133 [2024-05-15 00:55:52.045512] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:19:05.133 [2024-05-15 00:55:52.045602] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.133 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.133 [2024-05-15 00:55:52.109872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:05.391 [2024-05-15 00:55:52.228193] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:05.391 [2024-05-15 00:55:52.228250] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.391 [2024-05-15 00:55:52.228266] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.391 [2024-05-15 00:55:52.228279] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:05.391 [2024-05-15 00:55:52.228291] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:05.391 [2024-05-15 00:55:52.228377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.391 [2024-05-15 00:55:52.228428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.391 [2024-05-15 00:55:52.228477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:05.391 [2024-05-15 00:55:52.228480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.391 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:05.391 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:19:05.391 00:55:52 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:05.391 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.391 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:05.391 [2024-05-15 00:55:52.358534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.391 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.391 00:55:52 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:05.391 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:05.391 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:05.391 00:55:52 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:05.391 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:05.392 Malloc0 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:05.392 [2024-05-15 00:55:52.436759] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:05.392 [2024-05-15 00:55:52.437066] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:05.392 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.653 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:05.653 [ 00:19:05.653 { 00:19:05.653 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:05.653 "subtype": "Discovery", 00:19:05.653 "listen_addresses": [ 00:19:05.653 { 00:19:05.653 "trtype": "TCP", 00:19:05.653 "adrfam": "IPv4", 00:19:05.653 "traddr": "10.0.0.2", 00:19:05.653 "trsvcid": "4420" 00:19:05.653 } 00:19:05.653 ], 00:19:05.653 "allow_any_host": true, 00:19:05.653 "hosts": [] 00:19:05.653 }, 00:19:05.653 { 00:19:05.653 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.653 "subtype": "NVMe", 00:19:05.653 "listen_addresses": [ 00:19:05.653 { 00:19:05.653 "trtype": "TCP", 00:19:05.653 "adrfam": "IPv4", 00:19:05.653 "traddr": "10.0.0.2", 00:19:05.653 "trsvcid": "4420" 00:19:05.653 } 00:19:05.653 ], 00:19:05.653 "allow_any_host": true, 00:19:05.653 "hosts": [], 00:19:05.653 "serial_number": "SPDK00000000000001", 00:19:05.653 "model_number": "SPDK bdev Controller", 00:19:05.653 "max_namespaces": 32, 00:19:05.653 "min_cntlid": 1, 00:19:05.653 "max_cntlid": 65519, 00:19:05.653 "namespaces": [ 00:19:05.653 { 00:19:05.653 "nsid": 1, 00:19:05.653 "bdev_name": "Malloc0", 00:19:05.653 "name": "Malloc0", 00:19:05.653 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:05.653 "eui64": "ABCDEF0123456789", 00:19:05.653 "uuid": "df985c20-8db5-4486-a3f8-ef09966e66a8" 00:19:05.653 } 00:19:05.653 ] 00:19:05.653 } 00:19:05.653 ] 00:19:05.653 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.653 00:55:52 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:05.653 [2024-05-15 00:55:52.478340] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
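The rpc_cmd calls above are the autotest wrapper around scripts/rpc.py; collected as a standalone provisioning sequence they look as follows (arguments copied from the trace; the trace's deprecation warning concerns the older [listen_]address.transport RPC field, and the -t form used here is the current one):

# Provision the target as host/identify.sh did above (flags as in the trace).
RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems                          # should print the JSON dump above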
00:19:05.653 [2024-05-15 00:55:52.478392] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4047151 ] 00:19:05.653 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.653 [2024-05-15 00:55:52.520882] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:05.653 [2024-05-15 00:55:52.520961] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:05.653 [2024-05-15 00:55:52.520974] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:05.653 [2024-05-15 00:55:52.520991] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:05.653 [2024-05-15 00:55:52.521007] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:05.653 [2024-05-15 00:55:52.521286] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:05.653 [2024-05-15 00:55:52.521339] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x104be10 0 00:19:05.653 [2024-05-15 00:55:52.535943] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:05.653 [2024-05-15 00:55:52.535966] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:05.653 [2024-05-15 00:55:52.535982] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:05.653 [2024-05-15 00:55:52.535990] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:05.654 [2024-05-15 00:55:52.536047] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.536062] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.536071] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x104be10) 00:19:05.654 [2024-05-15 00:55:52.536092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:05.654 [2024-05-15 00:55:52.536121] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cbe40, cid 0, qid 0 00:19:05.654 [2024-05-15 00:55:52.541950] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.654 [2024-05-15 00:55:52.541971] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.654 [2024-05-15 00:55:52.541978] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.541987] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cbe40) on tqpair=0x104be10 00:19:05.654 [2024-05-15 00:55:52.542008] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:05.654 [2024-05-15 00:55:52.542020] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:05.654 [2024-05-15 00:55:52.542031] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:19:05.654 [2024-05-15 00:55:52.542053] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.542063] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.542070] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x104be10) 00:19:05.654 [2024-05-15 00:55:52.542082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.654 [2024-05-15 00:55:52.542108] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cbe40, cid 0, qid 0 00:19:05.654 [2024-05-15 00:55:52.542343] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.654 [2024-05-15 00:55:52.542358] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.654 [2024-05-15 00:55:52.542366] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.542373] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cbe40) on tqpair=0x104be10 00:19:05.654 [2024-05-15 00:55:52.542385] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:05.654 [2024-05-15 00:55:52.542399] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:05.654 [2024-05-15 00:55:52.542412] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.542420] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.542427] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x104be10) 00:19:05.654 [2024-05-15 00:55:52.542439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.654 [2024-05-15 00:55:52.542461] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cbe40, cid 0, qid 0 00:19:05.654 [2024-05-15 00:55:52.542640] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.654 [2024-05-15 00:55:52.542656] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.654 [2024-05-15 00:55:52.542663] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.542675] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cbe40) on tqpair=0x104be10 00:19:05.654 [2024-05-15 00:55:52.542688] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:05.654 [2024-05-15 00:55:52.542704] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:05.654 [2024-05-15 00:55:52.542718] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.542726] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.542733] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x104be10) 00:19:05.654 [2024-05-15 00:55:52.542744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.654 [2024-05-15 00:55:52.542766] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cbe40, cid 0, qid 0 00:19:05.654 [2024-05-15 00:55:52.546946] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.654 [2024-05-15 
00:55:52.546990] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.654 [2024-05-15 00:55:52.546998] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.547006] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cbe40) on tqpair=0x104be10 00:19:05.654 [2024-05-15 00:55:52.547019] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:05.654 [2024-05-15 00:55:52.547039] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.547048] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.547055] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x104be10) 00:19:05.654 [2024-05-15 00:55:52.547067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.654 [2024-05-15 00:55:52.547090] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cbe40, cid 0, qid 0 00:19:05.654 [2024-05-15 00:55:52.547270] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.654 [2024-05-15 00:55:52.547286] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.654 [2024-05-15 00:55:52.547293] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.547301] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cbe40) on tqpair=0x104be10 00:19:05.654 [2024-05-15 00:55:52.547313] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:05.654 [2024-05-15 00:55:52.547323] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:05.654 [2024-05-15 00:55:52.547337] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:05.654 [2024-05-15 00:55:52.547449] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:05.654 [2024-05-15 00:55:52.547458] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:05.654 [2024-05-15 00:55:52.547475] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.547484] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.547491] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x104be10) 00:19:05.654 [2024-05-15 00:55:52.547503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.654 [2024-05-15 00:55:52.547525] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cbe40, cid 0, qid 0 00:19:05.654 [2024-05-15 00:55:52.547708] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.654 [2024-05-15 00:55:52.547724] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.654 [2024-05-15 00:55:52.547731] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.547738] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cbe40) on tqpair=0x104be10 00:19:05.654 [2024-05-15 00:55:52.547749] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:05.654 [2024-05-15 00:55:52.547767] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.547776] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.547783] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x104be10) 00:19:05.654 [2024-05-15 00:55:52.547794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.654 [2024-05-15 00:55:52.547816] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cbe40, cid 0, qid 0 00:19:05.654 [2024-05-15 00:55:52.548010] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.654 [2024-05-15 00:55:52.548024] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.654 [2024-05-15 00:55:52.548032] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.548039] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cbe40) on tqpair=0x104be10 00:19:05.654 [2024-05-15 00:55:52.548050] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:05.654 [2024-05-15 00:55:52.548059] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:05.654 [2024-05-15 00:55:52.548074] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:05.654 [2024-05-15 00:55:52.548090] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:05.654 [2024-05-15 00:55:52.548107] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.548116] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x104be10) 00:19:05.654 [2024-05-15 00:55:52.548128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.654 [2024-05-15 00:55:52.548150] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cbe40, cid 0, qid 0 00:19:05.654 [2024-05-15 00:55:52.548347] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:05.654 [2024-05-15 00:55:52.548363] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:05.654 [2024-05-15 00:55:52.548371] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.548379] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x104be10): datao=0, datal=4096, cccid=0 00:19:05.654 [2024-05-15 00:55:52.548388] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10cbe40) on tqpair(0x104be10): expected_datao=0, payload_size=4096 00:19:05.654 [2024-05-15 00:55:52.548396] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.548446] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.548457] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.548583] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.654 [2024-05-15 00:55:52.548598] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.654 [2024-05-15 00:55:52.548605] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.654 [2024-05-15 00:55:52.548613] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cbe40) on tqpair=0x104be10 00:19:05.654 [2024-05-15 00:55:52.548632] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:05.654 [2024-05-15 00:55:52.548643] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:05.654 [2024-05-15 00:55:52.548653] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:05.655 [2024-05-15 00:55:52.548662] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:05.655 [2024-05-15 00:55:52.548671] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:05.655 [2024-05-15 00:55:52.548680] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:05.655 [2024-05-15 00:55:52.548702] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:05.655 [2024-05-15 00:55:52.548719] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.548728] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.548736] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x104be10) 00:19:05.655 [2024-05-15 00:55:52.548748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.655 [2024-05-15 00:55:52.548770] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cbe40, cid 0, qid 0 00:19:05.655 [2024-05-15 00:55:52.548952] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.655 [2024-05-15 00:55:52.548968] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.655 [2024-05-15 00:55:52.548975] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.548983] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cbe40) on tqpair=0x104be10 00:19:05.655 [2024-05-15 00:55:52.548999] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.549007] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.549014] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x104be10) 00:19:05.655 [2024-05-15 00:55:52.549025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:19:05.655 [2024-05-15 00:55:52.549036] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.549043] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.549050] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x104be10) 00:19:05.655 [2024-05-15 00:55:52.549060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:05.655 [2024-05-15 00:55:52.549071] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.549078] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.549085] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x104be10) 00:19:05.655 [2024-05-15 00:55:52.549095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:05.655 [2024-05-15 00:55:52.549106] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.549113] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.549120] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x104be10) 00:19:05.655 [2024-05-15 00:55:52.549130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:05.655 [2024-05-15 00:55:52.549140] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:05.655 [2024-05-15 00:55:52.549165] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:05.655 [2024-05-15 00:55:52.549179] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.549187] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x104be10) 00:19:05.655 [2024-05-15 00:55:52.549198] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.655 [2024-05-15 00:55:52.549222] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cbe40, cid 0, qid 0 00:19:05.655 [2024-05-15 00:55:52.549234] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cbfa0, cid 1, qid 0 00:19:05.655 [2024-05-15 00:55:52.549242] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc100, cid 2, qid 0 00:19:05.655 [2024-05-15 00:55:52.549251] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc260, cid 3, qid 0 00:19:05.655 [2024-05-15 00:55:52.549260] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc3c0, cid 4, qid 0 00:19:05.655 [2024-05-15 00:55:52.549500] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.655 [2024-05-15 00:55:52.549513] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.655 [2024-05-15 00:55:52.549520] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.549527] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cc3c0) on tqpair=0x104be10 
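At this point the initiator has finished Identify Controller and is arming its event plumbing: SET FEATURES ASYNC EVENT CONFIGURATION (FID 0x0b), four ASYNC EVENT REQUESTs (cid 0-3, one per the Async Event Request Limit advertised in the dump further below), then GET FEATURES KEEP ALIVE TIMER (FID 0x0f) to pick up the 5 s keep-alive cadence reported just after this. While the client is connected, the target side can be cross-checked over RPC; a sketch assuming the usual helpers (the /var/tmp/spdk.sock Unix socket is a filesystem object, so no netns exec is needed):

./scripts/rpc.py nvmf_subsystem_get_controllers nqn.2014-08.org.nvmexpress.discovery
./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2014-08.org.nvmexpress.discovery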
00:19:05.655 [2024-05-15 00:55:52.549539] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:05.655 [2024-05-15 00:55:52.549549] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:05.655 [2024-05-15 00:55:52.549568] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.549577] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x104be10) 00:19:05.655 [2024-05-15 00:55:52.549589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.655 [2024-05-15 00:55:52.549612] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc3c0, cid 4, qid 0 00:19:05.655 [2024-05-15 00:55:52.549804] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:05.655 [2024-05-15 00:55:52.549820] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:05.655 [2024-05-15 00:55:52.549827] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.549834] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x104be10): datao=0, datal=4096, cccid=4 00:19:05.655 [2024-05-15 00:55:52.549843] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10cc3c0) on tqpair(0x104be10): expected_datao=0, payload_size=4096 00:19:05.655 [2024-05-15 00:55:52.549851] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.549862] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.549870] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.549956] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.655 [2024-05-15 00:55:52.549970] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.655 [2024-05-15 00:55:52.549977] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.549984] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cc3c0) on tqpair=0x104be10 00:19:05.655 [2024-05-15 00:55:52.550008] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:05.655 [2024-05-15 00:55:52.550051] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.550066] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x104be10) 00:19:05.655 [2024-05-15 00:55:52.550079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.655 [2024-05-15 00:55:52.550092] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.550100] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.550107] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x104be10) 00:19:05.655 [2024-05-15 00:55:52.550117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:05.655 [2024-05-15 00:55:52.550146] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc3c0, cid 4, qid 0 00:19:05.655 [2024-05-15 00:55:52.550158] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc520, cid 5, qid 0 00:19:05.655 [2024-05-15 00:55:52.550402] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:05.655 [2024-05-15 00:55:52.550415] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:05.655 [2024-05-15 00:55:52.550422] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.550429] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x104be10): datao=0, datal=1024, cccid=4 00:19:05.655 [2024-05-15 00:55:52.550438] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10cc3c0) on tqpair(0x104be10): expected_datao=0, payload_size=1024 00:19:05.655 [2024-05-15 00:55:52.550446] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.550457] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.550464] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.550474] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.655 [2024-05-15 00:55:52.550484] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.655 [2024-05-15 00:55:52.550491] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.550498] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cc520) on tqpair=0x104be10 00:19:05.655 [2024-05-15 00:55:52.594945] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.655 [2024-05-15 00:55:52.594965] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.655 [2024-05-15 00:55:52.594973] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.594980] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cc3c0) on tqpair=0x104be10 00:19:05.655 [2024-05-15 00:55:52.595001] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.595011] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x104be10) 00:19:05.655 [2024-05-15 00:55:52.595023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.655 [2024-05-15 00:55:52.595055] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc3c0, cid 4, qid 0 00:19:05.655 [2024-05-15 00:55:52.595240] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:05.655 [2024-05-15 00:55:52.595255] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:05.655 [2024-05-15 00:55:52.595263] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.595270] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x104be10): datao=0, datal=3072, cccid=4 00:19:05.655 [2024-05-15 00:55:52.595279] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10cc3c0) on tqpair(0x104be10): expected_datao=0, payload_size=3072 00:19:05.655 [2024-05-15 00:55:52.595287] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.595320] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
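The GET LOG PAGE commands around this point are the standard discovery-log read. cdw10 packs the log id (0x70) in bits 7:0 and the zero-based dword count in bits 31:16: 0x00ff0070 fetches the 1 KiB header (generation counter, record count, record format), 0x02ff0070 re-fetches the full 3 KiB page (header plus the two 1 KiB records) once numrec=2 is known, and the 8-byte 0x00010070 read just below re-checks the generation counter to detect a log that changed mid-read. The same records can be pulled with the kernel initiator as an illustrative cross-check, assuming nvme-cli is installed (nvme-tcp was modprobe'd earlier):

nvme discover -t tcp -a 10.0.0.2 -s 4420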
00:19:05.655 [2024-05-15 00:55:52.595330] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.595460] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.655 [2024-05-15 00:55:52.595475] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.655 [2024-05-15 00:55:52.595483] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.595490] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cc3c0) on tqpair=0x104be10 00:19:05.655 [2024-05-15 00:55:52.595509] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.655 [2024-05-15 00:55:52.595518] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x104be10) 00:19:05.656 [2024-05-15 00:55:52.595529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.656 [2024-05-15 00:55:52.595559] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc3c0, cid 4, qid 0 00:19:05.656 [2024-05-15 00:55:52.595741] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:05.656 [2024-05-15 00:55:52.595757] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:05.656 [2024-05-15 00:55:52.595764] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:05.656 [2024-05-15 00:55:52.595771] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x104be10): datao=0, datal=8, cccid=4 00:19:05.656 [2024-05-15 00:55:52.595780] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10cc3c0) on tqpair(0x104be10): expected_datao=0, payload_size=8 00:19:05.656 [2024-05-15 00:55:52.595788] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.656 [2024-05-15 00:55:52.595799] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:05.656 [2024-05-15 00:55:52.595807] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:05.656 [2024-05-15 00:55:52.636132] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.656 [2024-05-15 00:55:52.636152] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.656 [2024-05-15 00:55:52.636159] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.656 [2024-05-15 00:55:52.636167] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cc3c0) on tqpair=0x104be10 00:19:05.656 ===================================================== 00:19:05.656 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:05.656 ===================================================== 00:19:05.656 Controller Capabilities/Features 00:19:05.656 ================================ 00:19:05.656 Vendor ID: 0000 00:19:05.656 Subsystem Vendor ID: 0000 00:19:05.656 Serial Number: .................... 00:19:05.656 Model Number: ........................................ 
00:19:05.656 Firmware Version: 24.05 00:19:05.656 Recommended Arb Burst: 0 00:19:05.656 IEEE OUI Identifier: 00 00 00 00:19:05.656 Multi-path I/O 00:19:05.656 May have multiple subsystem ports: No 00:19:05.656 May have multiple controllers: No 00:19:05.656 Associated with SR-IOV VF: No 00:19:05.656 Max Data Transfer Size: 131072 00:19:05.656 Max Number of Namespaces: 0 00:19:05.656 Max Number of I/O Queues: 1024 00:19:05.656 NVMe Specification Version (VS): 1.3 00:19:05.656 NVMe Specification Version (Identify): 1.3 00:19:05.656 Maximum Queue Entries: 128 00:19:05.656 Contiguous Queues Required: Yes 00:19:05.656 Arbitration Mechanisms Supported 00:19:05.656 Weighted Round Robin: Not Supported 00:19:05.656 Vendor Specific: Not Supported 00:19:05.656 Reset Timeout: 15000 ms 00:19:05.656 Doorbell Stride: 4 bytes 00:19:05.656 NVM Subsystem Reset: Not Supported 00:19:05.656 Command Sets Supported 00:19:05.656 NVM Command Set: Supported 00:19:05.656 Boot Partition: Not Supported 00:19:05.656 Memory Page Size Minimum: 4096 bytes 00:19:05.656 Memory Page Size Maximum: 4096 bytes 00:19:05.656 Persistent Memory Region: Not Supported 00:19:05.656 Optional Asynchronous Events Supported 00:19:05.656 Namespace Attribute Notices: Not Supported 00:19:05.656 Firmware Activation Notices: Not Supported 00:19:05.656 ANA Change Notices: Not Supported 00:19:05.656 PLE Aggregate Log Change Notices: Not Supported 00:19:05.656 LBA Status Info Alert Notices: Not Supported 00:19:05.656 EGE Aggregate Log Change Notices: Not Supported 00:19:05.656 Normal NVM Subsystem Shutdown event: Not Supported 00:19:05.656 Zone Descriptor Change Notices: Not Supported 00:19:05.656 Discovery Log Change Notices: Supported 00:19:05.656 Controller Attributes 00:19:05.656 128-bit Host Identifier: Not Supported 00:19:05.656 Non-Operational Permissive Mode: Not Supported 00:19:05.656 NVM Sets: Not Supported 00:19:05.656 Read Recovery Levels: Not Supported 00:19:05.656 Endurance Groups: Not Supported 00:19:05.656 Predictable Latency Mode: Not Supported 00:19:05.656 Traffic Based Keep ALive: Not Supported 00:19:05.656 Namespace Granularity: Not Supported 00:19:05.656 SQ Associations: Not Supported 00:19:05.656 UUID List: Not Supported 00:19:05.656 Multi-Domain Subsystem: Not Supported 00:19:05.656 Fixed Capacity Management: Not Supported 00:19:05.656 Variable Capacity Management: Not Supported 00:19:05.656 Delete Endurance Group: Not Supported 00:19:05.656 Delete NVM Set: Not Supported 00:19:05.656 Extended LBA Formats Supported: Not Supported 00:19:05.656 Flexible Data Placement Supported: Not Supported 00:19:05.656 00:19:05.656 Controller Memory Buffer Support 00:19:05.656 ================================ 00:19:05.656 Supported: No 00:19:05.656 00:19:05.656 Persistent Memory Region Support 00:19:05.656 ================================ 00:19:05.656 Supported: No 00:19:05.656 00:19:05.656 Admin Command Set Attributes 00:19:05.656 ============================ 00:19:05.656 Security Send/Receive: Not Supported 00:19:05.656 Format NVM: Not Supported 00:19:05.656 Firmware Activate/Download: Not Supported 00:19:05.656 Namespace Management: Not Supported 00:19:05.656 Device Self-Test: Not Supported 00:19:05.656 Directives: Not Supported 00:19:05.656 NVMe-MI: Not Supported 00:19:05.656 Virtualization Management: Not Supported 00:19:05.656 Doorbell Buffer Config: Not Supported 00:19:05.656 Get LBA Status Capability: Not Supported 00:19:05.656 Command & Feature Lockdown Capability: Not Supported 00:19:05.656 Abort Command Limit: 1 00:19:05.656 Async 
Event Request Limit: 4 00:19:05.656 Number of Firmware Slots: N/A 00:19:05.656 Firmware Slot 1 Read-Only: N/A 00:19:05.656 Firmware Activation Without Reset: N/A 00:19:05.656 Multiple Update Detection Support: N/A 00:19:05.656 Firmware Update Granularity: No Information Provided 00:19:05.656 Per-Namespace SMART Log: No 00:19:05.656 Asymmetric Namespace Access Log Page: Not Supported 00:19:05.656 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:05.656 Command Effects Log Page: Not Supported 00:19:05.656 Get Log Page Extended Data: Supported 00:19:05.656 Telemetry Log Pages: Not Supported 00:19:05.656 Persistent Event Log Pages: Not Supported 00:19:05.656 Supported Log Pages Log Page: May Support 00:19:05.656 Commands Supported & Effects Log Page: Not Supported 00:19:05.656 Feature Identifiers & Effects Log Page:May Support 00:19:05.656 NVMe-MI Commands & Effects Log Page: May Support 00:19:05.656 Data Area 4 for Telemetry Log: Not Supported 00:19:05.656 Error Log Page Entries Supported: 128 00:19:05.656 Keep Alive: Not Supported 00:19:05.656 00:19:05.656 NVM Command Set Attributes 00:19:05.656 ========================== 00:19:05.656 Submission Queue Entry Size 00:19:05.656 Max: 1 00:19:05.656 Min: 1 00:19:05.656 Completion Queue Entry Size 00:19:05.656 Max: 1 00:19:05.656 Min: 1 00:19:05.656 Number of Namespaces: 0 00:19:05.656 Compare Command: Not Supported 00:19:05.656 Write Uncorrectable Command: Not Supported 00:19:05.656 Dataset Management Command: Not Supported 00:19:05.656 Write Zeroes Command: Not Supported 00:19:05.656 Set Features Save Field: Not Supported 00:19:05.656 Reservations: Not Supported 00:19:05.656 Timestamp: Not Supported 00:19:05.656 Copy: Not Supported 00:19:05.656 Volatile Write Cache: Not Present 00:19:05.656 Atomic Write Unit (Normal): 1 00:19:05.656 Atomic Write Unit (PFail): 1 00:19:05.656 Atomic Compare & Write Unit: 1 00:19:05.656 Fused Compare & Write: Supported 00:19:05.656 Scatter-Gather List 00:19:05.656 SGL Command Set: Supported 00:19:05.656 SGL Keyed: Supported 00:19:05.656 SGL Bit Bucket Descriptor: Not Supported 00:19:05.656 SGL Metadata Pointer: Not Supported 00:19:05.656 Oversized SGL: Not Supported 00:19:05.656 SGL Metadata Address: Not Supported 00:19:05.656 SGL Offset: Supported 00:19:05.656 Transport SGL Data Block: Not Supported 00:19:05.656 Replay Protected Memory Block: Not Supported 00:19:05.656 00:19:05.656 Firmware Slot Information 00:19:05.656 ========================= 00:19:05.656 Active slot: 0 00:19:05.656 00:19:05.656 00:19:05.656 Error Log 00:19:05.656 ========= 00:19:05.656 00:19:05.656 Active Namespaces 00:19:05.656 ================= 00:19:05.656 Discovery Log Page 00:19:05.656 ================== 00:19:05.656 Generation Counter: 2 00:19:05.656 Number of Records: 2 00:19:05.656 Record Format: 0 00:19:05.656 00:19:05.656 Discovery Log Entry 0 00:19:05.656 ---------------------- 00:19:05.656 Transport Type: 3 (TCP) 00:19:05.656 Address Family: 1 (IPv4) 00:19:05.656 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:05.656 Entry Flags: 00:19:05.656 Duplicate Returned Information: 1 00:19:05.656 Explicit Persistent Connection Support for Discovery: 1 00:19:05.656 Transport Requirements: 00:19:05.656 Secure Channel: Not Required 00:19:05.656 Port ID: 0 (0x0000) 00:19:05.656 Controller ID: 65535 (0xffff) 00:19:05.656 Admin Max SQ Size: 128 00:19:05.656 Transport Service Identifier: 4420 00:19:05.656 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:05.657 Transport Address: 10.0.0.2 00:19:05.657 
Discovery Log Entry 1 00:19:05.657 ---------------------- 00:19:05.657 Transport Type: 3 (TCP) 00:19:05.657 Address Family: 1 (IPv4) 00:19:05.657 Subsystem Type: 2 (NVM Subsystem) 00:19:05.657 Entry Flags: 00:19:05.657 Duplicate Returned Information: 0 00:19:05.657 Explicit Persistent Connection Support for Discovery: 0 00:19:05.657 Transport Requirements: 00:19:05.657 Secure Channel: Not Required 00:19:05.657 Port ID: 0 (0x0000) 00:19:05.657 Controller ID: 65535 (0xffff) 00:19:05.657 Admin Max SQ Size: 128 00:19:05.657 Transport Service Identifier: 4420 00:19:05.657 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:05.657 Transport Address: 10.0.0.2 [2024-05-15 00:55:52.636294] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:19:05.657 [2024-05-15 00:55:52.636321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.657 [2024-05-15 00:55:52.636335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.657 [2024-05-15 00:55:52.636346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.657 [2024-05-15 00:55:52.636356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.657 [2024-05-15 00:55:52.636371] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.636380] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.636388] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x104be10) 00:19:05.657 [2024-05-15 00:55:52.636400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.657 [2024-05-15 00:55:52.636426] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc260, cid 3, qid 0 00:19:05.657 [2024-05-15 00:55:52.636618] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.657 [2024-05-15 00:55:52.636631] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.657 [2024-05-15 00:55:52.636639] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.636646] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cc260) on tqpair=0x104be10 00:19:05.657 [2024-05-15 00:55:52.636660] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.636673] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.636680] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x104be10) 00:19:05.657 [2024-05-15 00:55:52.636692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.657 [2024-05-15 00:55:52.636720] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc260, cid 3, qid 0 00:19:05.657 [2024-05-15 00:55:52.636938] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.657 [2024-05-15 00:55:52.636954] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.657 [2024-05-15 00:55:52.636961] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.636969] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cc260) on tqpair=0x104be10 00:19:05.657 [2024-05-15 00:55:52.636989] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:05.657 [2024-05-15 00:55:52.636999] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:05.657 [2024-05-15 00:55:52.637016] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.637026] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.637033] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x104be10) 00:19:05.657 [2024-05-15 00:55:52.637045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.657 [2024-05-15 00:55:52.637066] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc260, cid 3, qid 0 00:19:05.657 [2024-05-15 00:55:52.637234] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.657 [2024-05-15 00:55:52.637250] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.657 [2024-05-15 00:55:52.637257] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.637264] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cc260) on tqpair=0x104be10 00:19:05.657 [2024-05-15 00:55:52.637285] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.637296] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.637303] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x104be10) 00:19:05.657 [2024-05-15 00:55:52.637315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.657 [2024-05-15 00:55:52.637337] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc260, cid 3, qid 0 00:19:05.657 [2024-05-15 00:55:52.637531] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.657 [2024-05-15 00:55:52.637543] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.657 [2024-05-15 00:55:52.637551] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.637558] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cc260) on tqpair=0x104be10 00:19:05.657 [2024-05-15 00:55:52.637577] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.637587] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.637594] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x104be10) 00:19:05.657 [2024-05-15 00:55:52.637605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.657 [2024-05-15 00:55:52.637626] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc260, cid 3, qid 0 00:19:05.657 [2024-05-15 00:55:52.637817] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.657 [2024-05-15 
00:55:52.637833] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.657 [2024-05-15 00:55:52.637840] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.637852] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cc260) on tqpair=0x104be10 00:19:05.657 [2024-05-15 00:55:52.637872] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.637881] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.637889] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x104be10) 00:19:05.657 [2024-05-15 00:55:52.637900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.657 [2024-05-15 00:55:52.637921] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc260, cid 3, qid 0 00:19:05.657 [2024-05-15 00:55:52.638112] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.657 [2024-05-15 00:55:52.638128] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.657 [2024-05-15 00:55:52.638136] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.638143] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cc260) on tqpair=0x104be10 00:19:05.657 [2024-05-15 00:55:52.638162] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.638172] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.638179] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x104be10) 00:19:05.657 [2024-05-15 00:55:52.638191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.657 [2024-05-15 00:55:52.638214] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc260, cid 3, qid 0 00:19:05.657 [2024-05-15 00:55:52.638379] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.657 [2024-05-15 00:55:52.638395] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.657 [2024-05-15 00:55:52.638403] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.638412] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cc260) on tqpair=0x104be10 00:19:05.657 [2024-05-15 00:55:52.638431] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.638441] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.638448] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x104be10) 00:19:05.657 [2024-05-15 00:55:52.638460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.657 [2024-05-15 00:55:52.638481] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc260, cid 3, qid 0 00:19:05.657 [2024-05-15 00:55:52.638673] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.657 [2024-05-15 00:55:52.638687] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.657 [2024-05-15 00:55:52.638695] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:19:05.657 [2024-05-15 00:55:52.638703] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cc260) on tqpair=0x104be10 00:19:05.657 [2024-05-15 00:55:52.638723] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.638732] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.638740] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x104be10) 00:19:05.657 [2024-05-15 00:55:52.638751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.657 [2024-05-15 00:55:52.638772] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc260, cid 3, qid 0 00:19:05.657 [2024-05-15 00:55:52.642944] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.657 [2024-05-15 00:55:52.642962] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.657 [2024-05-15 00:55:52.642970] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.642977] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cc260) on tqpair=0x104be10 00:19:05.657 [2024-05-15 00:55:52.643012] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.643023] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.657 [2024-05-15 00:55:52.643030] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x104be10) 00:19:05.657 [2024-05-15 00:55:52.643042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.657 [2024-05-15 00:55:52.643065] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10cc260, cid 3, qid 0 00:19:05.657 [2024-05-15 00:55:52.643245] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.657 [2024-05-15 00:55:52.643261] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.657 [2024-05-15 00:55:52.643269] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.658 [2024-05-15 00:55:52.643276] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10cc260) on tqpair=0x104be10 00:19:05.658 [2024-05-15 00:55:52.643293] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:19:05.658 00:19:05.658 00:55:52 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:05.658 [2024-05-15 00:55:52.678966] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
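With the discovery controller shut down cleanly (6 ms, per the trace), the script repeats the identify pass directly against nqn.2016-06.io.spdk:cnode1. An illustrative kernel-initiator equivalent, assumed rather than taken from the harness (the /dev/nvme0 name depends on local enumeration):

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme list                                    # the Malloc0-backed namespace should appear
nvme id-ctrl /dev/nvme0                      # compare with the identify dump above
nvme disconnect -n nqn.2016-06.io.spdk:cnode1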
00:19:05.658 [2024-05-15 00:55:52.679013] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4047159 ] 00:19:05.658 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.921 [2024-05-15 00:55:52.722384] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:05.921 [2024-05-15 00:55:52.722449] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:05.921 [2024-05-15 00:55:52.722460] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:05.921 [2024-05-15 00:55:52.722478] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:05.921 [2024-05-15 00:55:52.722491] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:05.921 [2024-05-15 00:55:52.722715] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:05.921 [2024-05-15 00:55:52.722754] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa98e10 0 00:19:05.921 [2024-05-15 00:55:52.736946] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:05.921 [2024-05-15 00:55:52.736965] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:05.921 [2024-05-15 00:55:52.736980] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:05.921 [2024-05-15 00:55:52.736988] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:05.921 [2024-05-15 00:55:52.737035] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.737047] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.737054] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa98e10) 00:19:05.921 [2024-05-15 00:55:52.737071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:05.921 [2024-05-15 00:55:52.737098] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb18e40, cid 0, qid 0 00:19:05.921 [2024-05-15 00:55:52.744962] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.921 [2024-05-15 00:55:52.744985] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.921 [2024-05-15 00:55:52.744994] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.745002] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb18e40) on tqpair=0xa98e10 00:19:05.921 [2024-05-15 00:55:52.745018] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:05.921 [2024-05-15 00:55:52.745029] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:05.921 [2024-05-15 00:55:52.745039] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:05.921 [2024-05-15 00:55:52.745058] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.745068] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.921 [2024-05-15 
00:55:52.745075] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa98e10) 00:19:05.921 [2024-05-15 00:55:52.745088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.921 [2024-05-15 00:55:52.745113] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb18e40, cid 0, qid 0 00:19:05.921 [2024-05-15 00:55:52.745297] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.921 [2024-05-15 00:55:52.745313] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.921 [2024-05-15 00:55:52.745321] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.745328] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb18e40) on tqpair=0xa98e10 00:19:05.921 [2024-05-15 00:55:52.745339] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:05.921 [2024-05-15 00:55:52.745353] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:05.921 [2024-05-15 00:55:52.745366] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.745374] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.745382] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa98e10) 00:19:05.921 [2024-05-15 00:55:52.745393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.921 [2024-05-15 00:55:52.745416] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb18e40, cid 0, qid 0 00:19:05.921 [2024-05-15 00:55:52.745591] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.921 [2024-05-15 00:55:52.745604] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.921 [2024-05-15 00:55:52.745612] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.745619] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb18e40) on tqpair=0xa98e10 00:19:05.921 [2024-05-15 00:55:52.745628] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:05.921 [2024-05-15 00:55:52.745643] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:05.921 [2024-05-15 00:55:52.745657] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.745665] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.745672] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa98e10) 00:19:05.921 [2024-05-15 00:55:52.745684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.921 [2024-05-15 00:55:52.745706] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb18e40, cid 0, qid 0 00:19:05.921 [2024-05-15 00:55:52.745887] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.921 [2024-05-15 00:55:52.745903] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.921 
[2024-05-15 00:55:52.745916] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.745924] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb18e40) on tqpair=0xa98e10 00:19:05.921 [2024-05-15 00:55:52.745940] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:05.921 [2024-05-15 00:55:52.745960] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.745970] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.745977] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa98e10) 00:19:05.921 [2024-05-15 00:55:52.745989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.921 [2024-05-15 00:55:52.746011] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb18e40, cid 0, qid 0 00:19:05.921 [2024-05-15 00:55:52.746148] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.921 [2024-05-15 00:55:52.746165] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.921 [2024-05-15 00:55:52.746172] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.746180] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb18e40) on tqpair=0xa98e10 00:19:05.921 [2024-05-15 00:55:52.746188] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:05.921 [2024-05-15 00:55:52.746198] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:05.921 [2024-05-15 00:55:52.746212] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:05.921 [2024-05-15 00:55:52.746322] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:05.921 [2024-05-15 00:55:52.746330] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:05.921 [2024-05-15 00:55:52.746344] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.746352] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.746359] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa98e10) 00:19:05.921 [2024-05-15 00:55:52.746371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.921 [2024-05-15 00:55:52.746394] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb18e40, cid 0, qid 0 00:19:05.921 [2024-05-15 00:55:52.746571] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.921 [2024-05-15 00:55:52.746588] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.921 [2024-05-15 00:55:52.746595] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.746602] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb18e40) on tqpair=0xa98e10 00:19:05.921 
[2024-05-15 00:55:52.746611] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:05.921 [2024-05-15 00:55:52.746629] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.746639] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.746646] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa98e10) 00:19:05.921 [2024-05-15 00:55:52.746658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.921 [2024-05-15 00:55:52.746680] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb18e40, cid 0, qid 0 00:19:05.921 [2024-05-15 00:55:52.746854] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.921 [2024-05-15 00:55:52.746872] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.921 [2024-05-15 00:55:52.746880] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.746888] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb18e40) on tqpair=0xa98e10 00:19:05.921 [2024-05-15 00:55:52.746897] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:05.921 [2024-05-15 00:55:52.746906] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:05.921 [2024-05-15 00:55:52.746920] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:05.921 [2024-05-15 00:55:52.746941] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:05.921 [2024-05-15 00:55:52.746957] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.746966] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa98e10) 00:19:05.921 [2024-05-15 00:55:52.746978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.921 [2024-05-15 00:55:52.747001] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb18e40, cid 0, qid 0 00:19:05.921 [2024-05-15 00:55:52.747189] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:05.921 [2024-05-15 00:55:52.747205] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:05.921 [2024-05-15 00:55:52.747213] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:05.921 [2024-05-15 00:55:52.747220] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa98e10): datao=0, datal=4096, cccid=0 00:19:05.922 [2024-05-15 00:55:52.747229] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb18e40) on tqpair(0xa98e10): expected_datao=0, payload_size=4096 00:19:05.922 [2024-05-15 00:55:52.747237] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.747257] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.747267] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
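The *NOTICE* entries above show the controller init sequence reaching the admin IDENTIFY (opcode 06h) with cdw10:00000001 (CNS 01h, identify controller), answered by a 4096-byte C2H data PDU. A sketch of issuing that same command through SPDK's raw admin interface (spdk_nvme_ctrlr_cmd_admin_raw); the ctrlr handle and the busy-wait completion loop are illustrative assumptions, not how the driver's internal state machine is written:

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool g_identify_done;

    /* Completion callback: fires once the 4 KiB identify payload has landed. */
    static void identify_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)arg;
        g_identify_done = true;
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "IDENTIFY failed\n");
        }
    }

    static int identify_ctrlr_raw(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_cmd cmd = {0};
        /* DMA-safe 4096-byte buffer, matching payload_size=4096 in the log. */
        void *buf = spdk_zmalloc(4096, 0x1000, NULL,
                                 SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

        if (buf == NULL) {
            return -1;
        }
        cmd.opc = SPDK_NVME_OPC_IDENTIFY;   /* 06h, as printed above */
        cmd.cdw10 = 1;                      /* CNS 01h: identify controller */

        if (spdk_nvme_ctrlr_cmd_admin_raw(ctrlr, &cmd, buf, 4096,
                                          identify_done, NULL) != 0) {
            spdk_free(buf);
            return -1;
        }
        while (!g_identify_done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        spdk_free(buf);
        return 0;
    }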
00:19:05.922 [2024-05-15 00:55:52.791953] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.922 [2024-05-15 00:55:52.791973] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.922 [2024-05-15 00:55:52.791981] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.791989] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb18e40) on tqpair=0xa98e10 00:19:05.922 [2024-05-15 00:55:52.792002] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:05.922 [2024-05-15 00:55:52.792011] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:05.922 [2024-05-15 00:55:52.792020] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:05.922 [2024-05-15 00:55:52.792027] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:05.922 [2024-05-15 00:55:52.792036] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:05.922 [2024-05-15 00:55:52.792045] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:05.922 [2024-05-15 00:55:52.792067] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:05.922 [2024-05-15 00:55:52.792084] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.792093] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.792101] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa98e10) 00:19:05.922 [2024-05-15 00:55:52.792117] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.922 [2024-05-15 00:55:52.792143] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb18e40, cid 0, qid 0 00:19:05.922 [2024-05-15 00:55:52.792323] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.922 [2024-05-15 00:55:52.792340] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.922 [2024-05-15 00:55:52.792347] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.792355] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb18e40) on tqpair=0xa98e10 00:19:05.922 [2024-05-15 00:55:52.792367] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.792375] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.792383] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa98e10) 00:19:05.922 [2024-05-15 00:55:52.792394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:05.922 [2024-05-15 00:55:52.792405] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.792413] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.792420] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0xa98e10) 00:19:05.922 [2024-05-15 00:55:52.792430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:05.922 [2024-05-15 00:55:52.792441] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.792448] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.792456] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa98e10) 00:19:05.922 [2024-05-15 00:55:52.792465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:05.922 [2024-05-15 00:55:52.792476] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.792484] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.792491] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa98e10) 00:19:05.922 [2024-05-15 00:55:52.792501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:05.922 [2024-05-15 00:55:52.792511] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:05.922 [2024-05-15 00:55:52.792531] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:05.922 [2024-05-15 00:55:52.792545] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.792553] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa98e10) 00:19:05.922 [2024-05-15 00:55:52.792564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.922 [2024-05-15 00:55:52.792589] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb18e40, cid 0, qid 0 00:19:05.922 [2024-05-15 00:55:52.792601] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb18fa0, cid 1, qid 0 00:19:05.922 [2024-05-15 00:55:52.792609] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19100, cid 2, qid 0 00:19:05.922 [2024-05-15 00:55:52.792618] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19260, cid 3, qid 0 00:19:05.922 [2024-05-15 00:55:52.792627] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb193c0, cid 4, qid 0 00:19:05.922 [2024-05-15 00:55:52.792798] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.922 [2024-05-15 00:55:52.792814] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.922 [2024-05-15 00:55:52.792826] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.792834] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb193c0) on tqpair=0xa98e10 00:19:05.922 [2024-05-15 00:55:52.792843] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:05.922 [2024-05-15 00:55:52.792853] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:05.922 
[2024-05-15 00:55:52.792868] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:05.922 [2024-05-15 00:55:52.792886] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:05.922 [2024-05-15 00:55:52.792899] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.792907] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.792914] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa98e10) 00:19:05.922 [2024-05-15 00:55:52.792926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.922 [2024-05-15 00:55:52.792960] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb193c0, cid 4, qid 0 00:19:05.922 [2024-05-15 00:55:52.793136] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.922 [2024-05-15 00:55:52.793150] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.922 [2024-05-15 00:55:52.793157] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.793165] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb193c0) on tqpair=0xa98e10 00:19:05.922 [2024-05-15 00:55:52.793229] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:05.922 [2024-05-15 00:55:52.793252] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:05.922 [2024-05-15 00:55:52.793268] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.793276] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa98e10) 00:19:05.922 [2024-05-15 00:55:52.793288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.922 [2024-05-15 00:55:52.793311] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb193c0, cid 4, qid 0 00:19:05.922 [2024-05-15 00:55:52.793464] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:05.922 [2024-05-15 00:55:52.793477] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:05.922 [2024-05-15 00:55:52.793485] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.793492] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa98e10): datao=0, datal=4096, cccid=4 00:19:05.922 [2024-05-15 00:55:52.793501] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb193c0) on tqpair(0xa98e10): expected_datao=0, payload_size=4096 00:19:05.922 [2024-05-15 00:55:52.793509] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.793521] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.793529] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.793551] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.922 [2024-05-15 00:55:52.793563] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.922 [2024-05-15 00:55:52.793571] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.793578] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb193c0) on tqpair=0xa98e10 00:19:05.922 [2024-05-15 00:55:52.793604] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:05.922 [2024-05-15 00:55:52.793630] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:05.922 [2024-05-15 00:55:52.793650] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:05.922 [2024-05-15 00:55:52.793665] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.793673] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa98e10) 00:19:05.922 [2024-05-15 00:55:52.793685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.922 [2024-05-15 00:55:52.793708] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb193c0, cid 4, qid 0 00:19:05.922 [2024-05-15 00:55:52.793865] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:05.922 [2024-05-15 00:55:52.793879] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:05.922 [2024-05-15 00:55:52.793887] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.793894] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa98e10): datao=0, datal=4096, cccid=4 00:19:05.922 [2024-05-15 00:55:52.793902] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb193c0) on tqpair(0xa98e10): expected_datao=0, payload_size=4096 00:19:05.922 [2024-05-15 00:55:52.793910] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.793922] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.793930] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:05.922 [2024-05-15 00:55:52.793957] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.922 [2024-05-15 00:55:52.793969] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.923 [2024-05-15 00:55:52.793977] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.793985] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb193c0) on tqpair=0xa98e10 00:19:05.923 [2024-05-15 00:55:52.794004] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:05.923 [2024-05-15 00:55:52.794023] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:05.923 [2024-05-15 00:55:52.794038] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.794046] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa98e10) 00:19:05.923 [2024-05-15 00:55:52.794058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.923 [2024-05-15 00:55:52.794081] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb193c0, cid 4, qid 0 00:19:05.923 [2024-05-15 00:55:52.794238] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:05.923 [2024-05-15 00:55:52.794254] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:05.923 [2024-05-15 00:55:52.794262] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.794269] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa98e10): datao=0, datal=4096, cccid=4 00:19:05.923 [2024-05-15 00:55:52.794278] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb193c0) on tqpair(0xa98e10): expected_datao=0, payload_size=4096 00:19:05.923 [2024-05-15 00:55:52.794286] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.794297] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.794305] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.794319] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.923 [2024-05-15 00:55:52.794334] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.923 [2024-05-15 00:55:52.794342] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.794349] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb193c0) on tqpair=0xa98e10 00:19:05.923 [2024-05-15 00:55:52.794369] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:05.923 [2024-05-15 00:55:52.794393] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:05.923 [2024-05-15 00:55:52.794409] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:05.923 [2024-05-15 00:55:52.794420] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:05.923 [2024-05-15 00:55:52.794430] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:05.923 [2024-05-15 00:55:52.794439] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:05.923 [2024-05-15 00:55:52.794448] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:05.923 [2024-05-15 00:55:52.794458] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:05.923 [2024-05-15 00:55:52.794483] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.794493] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa98e10) 00:19:05.923 [2024-05-15 00:55:52.794505] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.923 [2024-05-15 00:55:52.794517] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.794525] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.794532] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa98e10) 00:19:05.923 [2024-05-15 00:55:52.794543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:05.923 [2024-05-15 00:55:52.794570] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb193c0, cid 4, qid 0 00:19:05.923 [2024-05-15 00:55:52.794582] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19520, cid 5, qid 0 00:19:05.923 [2024-05-15 00:55:52.794731] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.923 [2024-05-15 00:55:52.794747] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.923 [2024-05-15 00:55:52.794755] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.794763] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb193c0) on tqpair=0xa98e10 00:19:05.923 [2024-05-15 00:55:52.794776] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.923 [2024-05-15 00:55:52.794787] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.923 [2024-05-15 00:55:52.794794] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.794801] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19520) on tqpair=0xa98e10 00:19:05.923 [2024-05-15 00:55:52.794818] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.794828] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa98e10) 00:19:05.923 [2024-05-15 00:55:52.794839] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.923 [2024-05-15 00:55:52.794861] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19520, cid 5, qid 0 00:19:05.923 [2024-05-15 00:55:52.795001] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.923 [2024-05-15 00:55:52.795021] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.923 [2024-05-15 00:55:52.795029] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.795037] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19520) on tqpair=0xa98e10 00:19:05.923 [2024-05-15 00:55:52.795054] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.795063] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa98e10) 00:19:05.923 [2024-05-15 00:55:52.795075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.923 [2024-05-15 00:55:52.795097] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19520, cid 5, qid 0 00:19:05.923 [2024-05-15 00:55:52.795235] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.923 [2024-05-15 00:55:52.795250] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.923 [2024-05-15 00:55:52.795258] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.795265] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19520) on tqpair=0xa98e10 00:19:05.923 [2024-05-15 00:55:52.795283] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.795292] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa98e10) 00:19:05.923 [2024-05-15 00:55:52.795304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.923 [2024-05-15 00:55:52.795326] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19520, cid 5, qid 0 00:19:05.923 [2024-05-15 00:55:52.795470] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.923 [2024-05-15 00:55:52.795484] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.923 [2024-05-15 00:55:52.795492] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.795499] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19520) on tqpair=0xa98e10 00:19:05.923 [2024-05-15 00:55:52.795520] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.795530] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa98e10) 00:19:05.923 [2024-05-15 00:55:52.795542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.923 [2024-05-15 00:55:52.795556] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.795564] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa98e10) 00:19:05.923 [2024-05-15 00:55:52.795575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.923 [2024-05-15 00:55:52.795588] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.795596] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xa98e10) 00:19:05.923 [2024-05-15 00:55:52.795606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.923 [2024-05-15 00:55:52.795625] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.795634] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa98e10) 00:19:05.923 [2024-05-15 00:55:52.795645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.923 [2024-05-15 00:55:52.795668] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19520, cid 5, qid 0 00:19:05.923 [2024-05-15 00:55:52.795680] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb193c0, cid 4, qid 0 00:19:05.923 [2024-05-15 00:55:52.795693] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19680, cid 6, qid 0 00:19:05.923 [2024-05-15 00:55:52.795702] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb197e0, cid 7, qid 0 00:19:05.923 [2024-05-15 00:55:52.795911] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:05.923 [2024-05-15 00:55:52.795924] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:05.923 [2024-05-15 00:55:52.799945] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.799956] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa98e10): datao=0, datal=8192, cccid=5 00:19:05.923 [2024-05-15 00:55:52.799965] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb19520) on tqpair(0xa98e10): expected_datao=0, payload_size=8192 00:19:05.923 [2024-05-15 00:55:52.799974] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.799995] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.800005] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.800019] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:05.923 [2024-05-15 00:55:52.800030] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:05.923 [2024-05-15 00:55:52.800038] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.800045] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa98e10): datao=0, datal=512, cccid=4 00:19:05.923 [2024-05-15 00:55:52.800054] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb193c0) on tqpair(0xa98e10): expected_datao=0, payload_size=512 00:19:05.923 [2024-05-15 00:55:52.800062] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.800073] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.800081] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:05.923 [2024-05-15 00:55:52.800090] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:05.923 [2024-05-15 00:55:52.800100] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:05.924 [2024-05-15 00:55:52.800108] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:05.924 [2024-05-15 00:55:52.800115] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa98e10): datao=0, datal=512, cccid=6 00:19:05.924 [2024-05-15 00:55:52.800123] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb19680) on tqpair(0xa98e10): expected_datao=0, payload_size=512 00:19:05.924 [2024-05-15 00:55:52.800132] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.924 [2024-05-15 00:55:52.800142] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:05.924 [2024-05-15 00:55:52.800150] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:05.924 [2024-05-15 00:55:52.800159] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:05.924 [2024-05-15 00:55:52.800169] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:05.924 [2024-05-15 00:55:52.800177] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:05.924 [2024-05-15 00:55:52.800184] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa98e10): datao=0, datal=4096, cccid=7 00:19:05.924 [2024-05-15 00:55:52.800192] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xb197e0) on tqpair(0xa98e10): expected_datao=0, payload_size=4096
00:19:05.924 [2024-05-15 00:55:52.800200] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:19:05.924 [2024-05-15 00:55:52.800211] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:19:05.924 [2024-05-15 00:55:52.800219] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:19:05.924 [2024-05-15 00:55:52.800229] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:05.924 [2024-05-15 00:55:52.800239] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:05.924 [2024-05-15 00:55:52.800246] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:05.924 [2024-05-15 00:55:52.800254] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19520) on tqpair=0xa98e10
00:19:05.924 [2024-05-15 00:55:52.800279] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:05.924 [2024-05-15 00:55:52.800292] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:05.924 [2024-05-15 00:55:52.800299] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:05.924 [2024-05-15 00:55:52.800307] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb193c0) on tqpair=0xa98e10
00:19:05.924 [2024-05-15 00:55:52.800323] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:05.924 [2024-05-15 00:55:52.800335] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:05.924 [2024-05-15 00:55:52.800342] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:05.924 [2024-05-15 00:55:52.800349] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19680) on tqpair=0xa98e10
00:19:05.924 [2024-05-15 00:55:52.800365] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:05.924 [2024-05-15 00:55:52.800376] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:05.924 [2024-05-15 00:55:52.800383] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:05.924 [2024-05-15 00:55:52.800391] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb197e0) on tqpair=0xa98e10
00:19:05.924 =====================================================
00:19:05.924 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:05.924 =====================================================
00:19:05.924 Controller Capabilities/Features
00:19:05.924 ================================
00:19:05.924 Vendor ID: 8086
00:19:05.924 Subsystem Vendor ID: 8086
00:19:05.924 Serial Number: SPDK00000000000001
00:19:05.924 Model Number: SPDK bdev Controller
00:19:05.924 Firmware Version: 24.05
00:19:05.924 Recommended Arb Burst: 6
00:19:05.924 IEEE OUI Identifier: e4 d2 5c
00:19:05.924 Multi-path I/O
00:19:05.924 May have multiple subsystem ports: Yes
00:19:05.924 May have multiple controllers: Yes
00:19:05.924 Associated with SR-IOV VF: No
00:19:05.924 Max Data Transfer Size: 131072
00:19:05.924 Max Number of Namespaces: 32
00:19:05.924 Max Number of I/O Queues: 127
00:19:05.924 NVMe Specification Version (VS): 1.3
00:19:05.924 NVMe Specification Version (Identify): 1.3
00:19:05.924 Maximum Queue Entries: 128
00:19:05.924 Contiguous Queues Required: Yes
00:19:05.924 Arbitration Mechanisms Supported
00:19:05.924 Weighted Round Robin: Not Supported
00:19:05.924 Vendor Specific: Not Supported
00:19:05.924 Reset Timeout: 15000 ms
00:19:05.924 Doorbell Stride: 4 bytes
00:19:05.924 NVM Subsystem Reset: Not Supported
00:19:05.924 Command Sets Supported
00:19:05.924 NVM Command Set: Supported
00:19:05.924 Boot Partition: Not Supported
00:19:05.924 Memory Page Size Minimum: 4096 bytes
00:19:05.924 Memory Page Size Maximum: 4096 bytes
00:19:05.924 Persistent Memory Region: Not Supported
00:19:05.924 Optional Asynchronous Events Supported
00:19:05.924 Namespace Attribute Notices: Supported
00:19:05.924 Firmware Activation Notices: Not Supported
00:19:05.924 ANA Change Notices: Not Supported
00:19:05.924 PLE Aggregate Log Change Notices: Not Supported
00:19:05.924 LBA Status Info Alert Notices: Not Supported
00:19:05.924 EGE Aggregate Log Change Notices: Not Supported
00:19:05.924 Normal NVM Subsystem Shutdown event: Not Supported
00:19:05.924 Zone Descriptor Change Notices: Not Supported
00:19:05.924 Discovery Log Change Notices: Not Supported
00:19:05.924 Controller Attributes
00:19:05.924 128-bit Host Identifier: Supported
00:19:05.924 Non-Operational Permissive Mode: Not Supported
00:19:05.924 NVM Sets: Not Supported
00:19:05.924 Read Recovery Levels: Not Supported
00:19:05.924 Endurance Groups: Not Supported
00:19:05.924 Predictable Latency Mode: Not Supported
00:19:05.924 Traffic Based Keep Alive: Not Supported
00:19:05.924 Namespace Granularity: Not Supported
00:19:05.924 SQ Associations: Not Supported
00:19:05.924 UUID List: Not Supported
00:19:05.924 Multi-Domain Subsystem: Not Supported
00:19:05.924 Fixed Capacity Management: Not Supported
00:19:05.924 Variable Capacity Management: Not Supported
00:19:05.924 Delete Endurance Group: Not Supported
00:19:05.924 Delete NVM Set: Not Supported
00:19:05.924 Extended LBA Formats Supported: Not Supported
00:19:05.924 Flexible Data Placement Supported: Not Supported
00:19:05.924
00:19:05.924 Controller Memory Buffer Support
00:19:05.924 ================================
00:19:05.924 Supported: No
00:19:05.924
00:19:05.924 Persistent Memory Region Support
00:19:05.924 ================================
00:19:05.924 Supported: No
00:19:05.924
00:19:05.924 Admin Command Set Attributes
00:19:05.924 ============================
00:19:05.924 Security Send/Receive: Not Supported
00:19:05.924 Format NVM: Not Supported
00:19:05.924 Firmware Activate/Download: Not Supported
00:19:05.924 Namespace Management: Not Supported
00:19:05.924 Device Self-Test: Not Supported
00:19:05.924 Directives: Not Supported
00:19:05.924 NVMe-MI: Not Supported
00:19:05.924 Virtualization Management: Not Supported
00:19:05.924 Doorbell Buffer Config: Not Supported
00:19:05.924 Get LBA Status Capability: Not Supported
00:19:05.924 Command & Feature Lockdown Capability: Not Supported
00:19:05.924 Abort Command Limit: 4
00:19:05.924 Async Event Request Limit: 4
00:19:05.924 Number of Firmware Slots: N/A
00:19:05.924 Firmware Slot 1 Read-Only: N/A
00:19:05.924 Firmware Activation Without Reset: N/A
00:19:05.924 Multiple Update Detection Support: N/A
00:19:05.924 Firmware Update Granularity: No Information Provided
00:19:05.924 Per-Namespace SMART Log: No
00:19:05.924 Asymmetric Namespace Access Log Page: Not Supported
00:19:05.924 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:19:05.924 Command Effects Log Page: Supported
00:19:05.924 Get Log Page Extended Data: Supported
00:19:05.924 Telemetry Log Pages: Not Supported
00:19:05.924 Persistent Event Log Pages: Not Supported
00:19:05.924 Supported Log Pages Log Page: May Support
00:19:05.924 Commands Supported & Effects Log Page: Not Supported
00:19:05.924 Feature Identifiers & Effects Log Page: May Support
00:19:05.924 NVMe-MI Commands & Effects Log Page: May Support
00:19:05.924 Data Area 4 for Telemetry Log: Not Supported
00:19:05.924 Error Log Page Entries Supported: 128
00:19:05.924 Keep Alive: Supported
00:19:05.924 Keep Alive Granularity: 10000 ms
00:19:05.924
00:19:05.924 NVM Command Set Attributes
00:19:05.924 ==========================
00:19:05.924 Submission Queue Entry Size
00:19:05.924 Max: 64
00:19:05.924 Min: 64
00:19:05.924 Completion Queue Entry Size
00:19:05.924 Max: 16
00:19:05.924 Min: 16
00:19:05.924 Number of Namespaces: 32
00:19:05.924 Compare Command: Supported
00:19:05.924 Write Uncorrectable Command: Not Supported
00:19:05.924 Dataset Management Command: Supported
00:19:05.924 Write Zeroes Command: Supported
00:19:05.924 Set Features Save Field: Not Supported
00:19:05.924 Reservations: Supported
00:19:05.924 Timestamp: Not Supported
00:19:05.924 Copy: Supported
00:19:05.924 Volatile Write Cache: Present
00:19:05.924 Atomic Write Unit (Normal): 1
00:19:05.924 Atomic Write Unit (PFail): 1
00:19:05.924 Atomic Compare & Write Unit: 1
00:19:05.924 Fused Compare & Write: Supported
00:19:05.924 Scatter-Gather List
00:19:05.924 SGL Command Set: Supported
00:19:05.924 SGL Keyed: Supported
00:19:05.924 SGL Bit Bucket Descriptor: Not Supported
00:19:05.924 SGL Metadata Pointer: Not Supported
00:19:05.924 Oversized SGL: Not Supported
00:19:05.924 SGL Metadata Address: Not Supported
00:19:05.924 SGL Offset: Supported
00:19:05.924 Transport SGL Data Block: Not Supported
00:19:05.924 Replay Protected Memory Block: Not Supported
00:19:05.924
00:19:05.924 Firmware Slot Information
00:19:05.924 =========================
00:19:05.924 Active slot: 1
00:19:05.924 Slot 1 Firmware Revision: 24.05
00:19:05.924
00:19:05.924
00:19:05.924 Commands Supported and Effects
00:19:05.924 ==============================
00:19:05.924 Admin Commands
00:19:05.924 --------------
00:19:05.924 Get Log Page (02h): Supported
00:19:05.924 Identify (06h): Supported
00:19:05.925 Abort (08h): Supported
00:19:05.925 Set Features (09h): Supported
00:19:05.925 Get Features (0Ah): Supported
00:19:05.925 Asynchronous Event Request (0Ch): Supported
00:19:05.925 Keep Alive (18h): Supported
00:19:05.925 I/O Commands
00:19:05.925 ------------
00:19:05.925 Flush (00h): Supported LBA-Change
00:19:05.925 Write (01h): Supported LBA-Change
00:19:05.925 Read (02h): Supported
00:19:05.925 Compare (05h): Supported
00:19:05.925 Write Zeroes (08h): Supported LBA-Change
00:19:05.925 Dataset Management (09h): Supported LBA-Change
00:19:05.925 Copy (19h): Supported LBA-Change
00:19:05.925 Unknown (79h): Supported LBA-Change
00:19:05.925 Unknown (7Ah): Supported
00:19:05.925
00:19:05.925 Error Log
00:19:05.925 =========
00:19:05.925
00:19:05.925 Arbitration
00:19:05.925 ===========
00:19:05.925 Arbitration Burst: 1
00:19:05.925
00:19:05.925 Power Management
00:19:05.925 ================
00:19:05.925 Number of Power States: 1
00:19:05.925 Current Power State: Power State #0
00:19:05.925 Power State #0:
00:19:05.925 Max Power: 0.00 W
00:19:05.925 Non-Operational State: Operational
00:19:05.925 Entry Latency: Not Reported
00:19:05.925 Exit Latency: Not Reported
00:19:05.925 Relative Read Throughput: 0
00:19:05.925 Relative Read Latency: 0
00:19:05.925 Relative Write Throughput: 0
00:19:05.925 Relative Write Latency: 0
00:19:05.925 Idle Power: Not Reported
00:19:05.925 Active Power: Not Reported
00:19:05.925 Non-Operational Permissive Mode: Not Supported
00:19:05.925
00:19:05.925 Health Information
00:19:05.925 ==================
00:19:05.925 Critical Warnings:
00:19:05.925 Available Spare Space: OK
00:19:05.925 Temperature: OK
00:19:05.925 Device Reliability: OK
00:19:05.925 Read Only: No
00:19:05.925 Volatile Memory Backup: OK
00:19:05.925 Current Temperature: 0 Kelvin (-273 Celsius)
00:19:05.925 Temperature Threshold: [2024-05-15 00:55:52.800540] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:19:05.925 [2024-05-15 00:55:52.800553] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa98e10)
00:19:05.925 [2024-05-15 00:55:52.800566] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:05.925 [2024-05-15 00:55:52.800591] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb197e0, cid 7, qid 0
00:19:05.925 [2024-05-15 00:55:52.800789] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:05.925 [2024-05-15 00:55:52.800805] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:05.925 [2024-05-15 00:55:52.800813] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:05.925 [2024-05-15 00:55:52.800820] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb197e0) on tqpair=0xa98e10
00:19:05.925 [2024-05-15 00:55:52.800871] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:19:05.925 [2024-05-15 00:55:52.800893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:05.925 [2024-05-15 00:55:52.800906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:05.925 [2024-05-15 00:55:52.800916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:05.925 [2024-05-15 00:55:52.800927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:05.925 [2024-05-15 00:55:52.800961] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:19:05.925 [2024-05-15 00:55:52.800970] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:19:05.925 [2024-05-15 00:55:52.800977] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa98e10)
00:19:05.925 [2024-05-15 00:55:52.800989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:05.925 [2024-05-15 00:55:52.801013] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19260, cid 3, qid 0
00:19:05.925 [2024-05-15 00:55:52.801159] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:05.925 [2024-05-15 00:55:52.801175] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:05.925 [2024-05-15 00:55:52.801183] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:05.925 [2024-05-15 00:55:52.801190] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19260) on tqpair=0xa98e10
00:19:05.925 [2024-05-15 00:55:52.801204] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:19:05.925 [2024-05-15 00:55:52.801213] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:19:05.925 [2024-05-15 00:55:52.801226] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa98e10) 00:19:05.925 [2024-05-15 00:55:52.801239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.925 [2024-05-15 00:55:52.801266] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19260, cid 3, qid 0 00:19:05.925 [2024-05-15 00:55:52.801411] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.925 [2024-05-15 00:55:52.801424] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.925 [2024-05-15 00:55:52.801431] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.925 [2024-05-15 00:55:52.801439] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19260) on tqpair=0xa98e10 00:19:05.925 [2024-05-15 00:55:52.801447] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:05.925 [2024-05-15 00:55:52.801456] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:05.925 [2024-05-15 00:55:52.801473] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.925 [2024-05-15 00:55:52.801482] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.925 [2024-05-15 00:55:52.801490] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa98e10) 00:19:05.925 [2024-05-15 00:55:52.801501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.925 [2024-05-15 00:55:52.801523] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19260, cid 3, qid 0 00:19:05.925 [2024-05-15 00:55:52.801658] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.925 [2024-05-15 00:55:52.801674] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.925 [2024-05-15 00:55:52.801682] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.925 [2024-05-15 00:55:52.801690] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19260) on tqpair=0xa98e10 00:19:05.925 [2024-05-15 00:55:52.801708] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.925 [2024-05-15 00:55:52.801717] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.925 [2024-05-15 00:55:52.801725] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa98e10) 00:19:05.925 [2024-05-15 00:55:52.801736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.925 [2024-05-15 00:55:52.801758] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19260, cid 3, qid 0 00:19:05.925 [2024-05-15 00:55:52.801894] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.925 [2024-05-15 00:55:52.801910] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.925 [2024-05-15 00:55:52.801917] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.925 [2024-05-15 00:55:52.801925] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19260) on tqpair=0xa98e10 00:19:05.925 [2024-05-15 00:55:52.801953] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.925 [2024-05-15 00:55:52.801964] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.925 [2024-05-15 00:55:52.801971] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa98e10) 00:19:05.925 [2024-05-15 00:55:52.801983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.925 [2024-05-15 00:55:52.802006] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19260, cid 3, qid 0 00:19:05.925 [2024-05-15 00:55:52.802129] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.925 [2024-05-15 00:55:52.802142] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.925 [2024-05-15 00:55:52.802149] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.925 [2024-05-15 00:55:52.802157] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19260) on tqpair=0xa98e10 00:19:05.925 [2024-05-15 00:55:52.802179] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.802189] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.802197] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa98e10) 00:19:05.926 [2024-05-15 00:55:52.802208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.926 [2024-05-15 00:55:52.802232] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19260, cid 3, qid 0 00:19:05.926 [2024-05-15 00:55:52.802364] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.926 [2024-05-15 00:55:52.802377] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.926 [2024-05-15 00:55:52.802385] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.802392] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19260) on tqpair=0xa98e10 00:19:05.926 [2024-05-15 00:55:52.802410] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.802419] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.802427] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa98e10) 00:19:05.926 [2024-05-15 00:55:52.802438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.926 [2024-05-15 00:55:52.802462] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19260, cid 3, qid 0 00:19:05.926 [2024-05-15 00:55:52.802593] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.926 [2024-05-15 00:55:52.802609] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.926 [2024-05-15 00:55:52.802617] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.802624] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19260) on tqpair=0xa98e10 00:19:05.926 [2024-05-15 00:55:52.802642] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.802652] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.802659] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa98e10) 00:19:05.926 
[2024-05-15 00:55:52.802671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.926 [2024-05-15 00:55:52.802692] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19260, cid 3, qid 0 00:19:05.926 [2024-05-15 00:55:52.802829] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.926 [2024-05-15 00:55:52.802845] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.926 [2024-05-15 00:55:52.802852] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.802860] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19260) on tqpair=0xa98e10 00:19:05.926 [2024-05-15 00:55:52.802878] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.802888] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.802895] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa98e10) 00:19:05.926 [2024-05-15 00:55:52.802907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.926 [2024-05-15 00:55:52.802928] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19260, cid 3, qid 0 00:19:05.926 [2024-05-15 00:55:52.803067] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.926 [2024-05-15 00:55:52.803081] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.926 [2024-05-15 00:55:52.803088] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.803096] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19260) on tqpair=0xa98e10 00:19:05.926 [2024-05-15 00:55:52.803113] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.803127] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.803135] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa98e10) 00:19:05.926 [2024-05-15 00:55:52.803147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.926 [2024-05-15 00:55:52.803169] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19260, cid 3, qid 0 00:19:05.926 [2024-05-15 00:55:52.803307] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.926 [2024-05-15 00:55:52.803321] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.926 [2024-05-15 00:55:52.803328] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.803336] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19260) on tqpair=0xa98e10 00:19:05.926 [2024-05-15 00:55:52.803353] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.803363] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.803370] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa98e10) 00:19:05.926 [2024-05-15 00:55:52.803382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.926 [2024-05-15 00:55:52.803403] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19260, cid 3, qid 0 00:19:05.926 [2024-05-15 00:55:52.803529] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.926 [2024-05-15 00:55:52.803542] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.926 [2024-05-15 00:55:52.803550] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.803557] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19260) on tqpair=0xa98e10 00:19:05.926 [2024-05-15 00:55:52.803575] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.803584] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.803591] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa98e10) 00:19:05.926 [2024-05-15 00:55:52.803603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.926 [2024-05-15 00:55:52.803625] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19260, cid 3, qid 0 00:19:05.926 [2024-05-15 00:55:52.803756] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.926 [2024-05-15 00:55:52.803769] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.926 [2024-05-15 00:55:52.803776] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.803784] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19260) on tqpair=0xa98e10 00:19:05.926 [2024-05-15 00:55:52.803801] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.803811] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.803818] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa98e10) 00:19:05.926 [2024-05-15 00:55:52.803830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.926 [2024-05-15 00:55:52.803851] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19260, cid 3, qid 0 00:19:05.926 [2024-05-15 00:55:52.807943] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.926 [2024-05-15 00:55:52.807960] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:05.926 [2024-05-15 00:55:52.807968] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.807975] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19260) on tqpair=0xa98e10 00:19:05.926 [2024-05-15 00:55:52.807994] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.808004] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:05.926 [2024-05-15 00:55:52.808032] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa98e10) 00:19:05.926 [2024-05-15 00:55:52.808045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.926 [2024-05-15 00:55:52.808068] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb19260, cid 3, qid 0 00:19:05.926 [2024-05-15 00:55:52.808230] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:05.926 
00:19:05.926 [2024-05-15 00:55:52.808244] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:05.926 [2024-05-15 00:55:52.808251] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:05.926 [2024-05-15 00:55:52.808259] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb19260) on tqpair=0xa98e10
00:19:05.926 [2024-05-15 00:55:52.808273] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:19:05.926 0 Kelvin (-273 Celsius)
00:19:05.926 Available Spare: 0%
00:19:05.926 Available Spare Threshold: 0%
00:19:05.926 Life Percentage Used: 0%
00:19:05.926 Data Units Read: 0
00:19:05.926 Data Units Written: 0
00:19:05.926 Host Read Commands: 0
00:19:05.926 Host Write Commands: 0
00:19:05.926 Controller Busy Time: 0 minutes
00:19:05.926 Power Cycles: 0
00:19:05.926 Power On Hours: 0 hours
00:19:05.926 Unsafe Shutdowns: 0
00:19:05.926 Unrecoverable Media Errors: 0
00:19:05.926 Lifetime Error Log Entries: 0
00:19:05.926 Warning Temperature Time: 0 minutes
00:19:05.926 Critical Temperature Time: 0 minutes
00:19:05.926
00:19:05.926 Number of Queues
00:19:05.926 ================
00:19:05.926 Number of I/O Submission Queues: 127
00:19:05.926 Number of I/O Completion Queues: 127
00:19:05.926
00:19:05.926 Active Namespaces
00:19:05.926 =================
00:19:05.926 Namespace ID:1
00:19:05.926 Error Recovery Timeout: Unlimited
00:19:05.926 Command Set Identifier: NVM (00h)
00:19:05.926 Deallocate: Supported
00:19:05.926 Deallocated/Unwritten Error: Not Supported
00:19:05.926 Deallocated Read Value: Unknown
00:19:05.926 Deallocate in Write Zeroes: Not Supported
00:19:05.926 Deallocated Guard Field: 0xFFFF
00:19:05.926 Flush: Supported
00:19:05.926 Reservation: Supported
00:19:05.926 Namespace Sharing Capabilities: Multiple Controllers
00:19:05.926 Size (in LBAs): 131072 (0GiB)
00:19:05.926 Capacity (in LBAs): 131072 (0GiB)
00:19:05.926 Utilization (in LBAs): 131072 (0GiB)
00:19:05.926 NGUID: ABCDEF0123456789ABCDEF0123456789
00:19:05.926 EUI64: ABCDEF0123456789
00:19:05.926 UUID: df985c20-8db5-4486-a3f8-ef09966e66a8
00:19:05.926 Thin Provisioning: Not Supported
00:19:05.926 Per-NS Atomic Units: Yes
00:19:05.926 Atomic Boundary Size (Normal): 0
00:19:05.926 Atomic Boundary Size (PFail): 0
00:19:05.926 Atomic Boundary Offset: 0
00:19:05.926 Maximum Single Source Range Length: 65535
00:19:05.926 Maximum Copy Length: 65535
00:19:05.926 Maximum Source Range Count: 1
00:19:05.926 NGUID/EUI64 Never Reused: No
00:19:05.926 Namespace Write Protected: No
00:19:05.926 Number of LBA Formats: 1
00:19:05.926 Current LBA Format: LBA Format #00
00:19:05.926 LBA Format #00: Data Size: 512 Metadata Size: 0
00:19:05.926
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:05.927 rmmod nvme_tcp
00:19:05.927 rmmod nvme_fabrics
00:19:05.927 rmmod nvme_keyring
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 4047044 ']'
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 4047044
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 4047044 ']'
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 4047044
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4047044
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4047044'
00:19:05.927 killing process with pid 4047044
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 4047044
00:19:05.927 [2024-05-15 00:55:52.921060] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:19:05.927 00:55:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 4047044
00:19:06.187 00:55:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:06.187 00:55:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:19:06.187 00:55:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:19:06.187 00:55:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:06.187 00:55:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns
00:19:06.187 00:55:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:06.187 00:55:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:06.187 00:55:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:08.728 00:55:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:19:08.728
00:19:08.728 real 0m5.017s
00:19:08.728 user 0m4.205s
00:19:08.728 sys 0m1.578s
00:19:08.728 00:55:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable
00:19:08.728 00:55:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:19:08.728 ************************************
00:19:08.728 END TEST nvmf_identify
************************************
00:19:08.728 00:55:55 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:19:08.728 00:55:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:19:08.728 00:55:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:19:08.728 00:55:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:19:08.728 ************************************
00:19:08.728 START TEST nvmf_perf
00:19:08.728 ************************************
00:19:08.728 00:55:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:19:08.728 * Looking for test storage...
00:19:08.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:19:08.729 00:55:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:10.110 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:10.110 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:10.110 Found net devices under 0000:08:00.0: cvl_0_0 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.110 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:10.110 00:55:56 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1'
00:19:10.111 Found net devices under 0000:08:00.1: cvl_0_1
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:19:10.111 00:55:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:10.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:10.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms
00:19:10.111
00:19:10.111 --- 10.0.0.2 ping statistics ---
00:19:10.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:10.111 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:10.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:10.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms
00:19:10.111
00:19:10.111 --- 10.0.0.1 ping statistics ---
00:19:10.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:10.111 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=4048650
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 4048650
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 4048650 ']'
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:10.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable
00:19:10.111 00:55:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:19:10.111 [2024-05-15 00:55:57.112196] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization...
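The two pings above confirm the path every run below depends on: the target port (cvl_0_0, 10.0.0.2) has been moved into the cvl_0_0_ns_spdk namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, so NVMe/TCP traffic between the two really crosses the link instead of short-circuiting through the kernel. A minimal sketch of that bring-up, condensed from the nvmf_tcp_init trace above (interface and namespace names are specific to this rig, not fixed values):

    ip netns add cvl_0_0_ns_spdk                                    # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open TCP 4420 on the initiator side
    ping -c 1 10.0.0.2                                              # initiator -> target sanity check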
00:19:10.111 [2024-05-15 00:55:57.112285] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:10.111 EAL: No free 2048 kB hugepages reported on node 1
00:19:10.370 [2024-05-15 00:55:57.176869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:19:10.370 [2024-05-15 00:55:57.293797] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:10.370 [2024-05-15 00:55:57.293860] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:10.370 [2024-05-15 00:55:57.293877] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:10.370 [2024-05-15 00:55:57.293890] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:10.370 [2024-05-15 00:55:57.293902] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:10.370 [2024-05-15 00:55:57.293984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:19:10.370 [2024-05-15 00:55:57.294035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:10.370 [2024-05-15 00:55:57.294087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:19:10.370 [2024-05-15 00:55:57.294090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:10.370 00:55:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:19:10.370 00:55:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0
00:19:10.370 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:19:10.370 00:55:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:10.370 00:55:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:19:10.627 00:55:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:10.627 00:55:57 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:19:10.627 00:55:57 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:19:13.910 00:56:00 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:19:13.910 00:56:00 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:19:13.910 00:56:00 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:84:00.0
00:19:13.910 00:56:00 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:19:14.168 00:56:01 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:19:14.168 00:56:01 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:84:00.0 ']'
00:19:14.168 00:56:01 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:19:14.168 00:56:01 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:19:14.168 00:56:01 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:19:14.426 [2024-05-15 00:56:01.438061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
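With the target process up, perf.sh does all remaining configuration through rpc.py, as the trace lines that follow show step by step. Condensed into one place, the bring-up amounts to the sketch below (rpc.py is the in-tree scripts/rpc.py, paths shortened; every command, NQN, and address is taken verbatim from this run's trace):

    rpc.py nvmf_create_transport -t tcp -o              # TCP transport, options per NVMF_TRANSPORT_OPTS above
    rpc.py bdev_malloc_create 64 512                    # 64 MiB RAM bdev with 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1                    # local NVMe at 0000:84:00.0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420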
00:19:14.426 00:56:01 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:19:14.991 00:56:01 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:19:14.991 00:56:01 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:19:14.991 00:56:01 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:19:14.991 00:56:01 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:19:15.248 00:56:02 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:15.506 [2024-05-15 00:56:02.461411] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:19:15.506 [2024-05-15 00:56:02.461688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:15.506 00:56:02 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:19:15.764 00:56:02 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:84:00.0 ']'
00:19:15.764 00:56:02 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0'
00:19:15.764 00:56:02 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:19:15.764 00:56:02 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0'
00:19:17.134 Initializing NVMe Controllers
00:19:17.134 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54]
00:19:17.134 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0
00:19:17.134 Initialization complete. Launching workers.
00:19:17.134 ========================================================
00:19:17.134                                                               Latency(us)
00:19:17.134 Device Information                            :       IOPS      MiB/s    Average        min        max
00:19:17.134 PCIE (0000:84:00.0) NSID 1 from core 0:    65974.18     257.71     484.38      54.95    5357.95
00:19:17.134 ========================================================
00:19:17.134 Total                                   :    65974.18     257.71     484.38      54.95    5357.95
00:19:17.134
00:19:17.134 00:56:03 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:19:17.134 EAL: No free 2048 kB hugepages reported on node 1
00:19:18.065 Initializing NVMe Controllers
00:19:18.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:18.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:19:18.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:19:18.065 Initialization complete. Launching workers.
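In these result tables MiB/s is simply IOPS times the IO size: the local PCIe baseline above reports 65974.18 IOPS at 4 KiB per IO, and 65974.18 * 4096 / 2^20 = 257.71 MiB/s, exactly the printed column. The same identity spot-checks the -q 1 NVMe/TCP run whose table follows, for instance with a throwaway awk line:

    awk 'BEGIN { printf "%.2f MiB/s\n", 65974.18 * 4096 / 1048576 }'   # prints 257.71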
00:19:18.065 ========================================================
00:19:18.065                                                               Latency(us)
00:19:18.065 Device Information                            :       IOPS      MiB/s    Average        min        max
00:19:18.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:      78.00       0.30   12905.47     196.50   46746.63
00:19:18.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:      66.00       0.26   15234.59    6993.05   47899.82
00:19:18.065 ========================================================
00:19:18.065 Total                                   :     144.00       0.56   13972.99     196.50   47899.82
00:19:18.065
00:19:18.324 00:56:05 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:19:18.324 EAL: No free 2048 kB hugepages reported on node 1
00:19:19.699 Initializing NVMe Controllers
00:19:19.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:19.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:19:19.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:19:19.699 Initialization complete. Launching workers.
00:19:19.699 ========================================================
00:19:19.699                                                               Latency(us)
00:19:19.699 Device Information                            :       IOPS      MiB/s    Average        min        max
00:19:19.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    7493.18      29.27    4269.39     493.62    8373.86
00:19:19.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    3828.36      14.95    8373.18    5477.64   16977.39
00:19:19.699 ========================================================
00:19:19.699 Total                                   :   11321.55      44.22    5657.08     493.62   16977.39
00:19:19.699
00:19:19.699 00:56:06 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:19:19.699 00:56:06 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:19:19.699 00:56:06 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:19:19.699 EAL: No free 2048 kB hugepages reported on node 1
00:19:22.232 Initializing NVMe Controllers
00:19:22.232 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:22.232 Controller IO queue size 128, less than required.
00:19:22.232 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:22.232 Controller IO queue size 128, less than required.
00:19:22.232 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:22.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:19:22.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:19:22.232 Initialization complete. Launching workers.
00:19:22.232 ========================================================
00:19:22.232                                                               Latency(us)
00:19:22.232 Device Information                            :       IOPS      MiB/s    Average        min        max
00:19:22.232 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1061.40     265.35  123894.58   83084.46  172514.27
00:19:22.232 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:     573.94     143.49  237484.12   67771.62  376644.85
00:19:22.232 ========================================================
00:19:22.232 Total                                   :    1635.34     408.84  163760.30   67771.62  376644.85
00:19:22.232
00:19:22.232 00:56:08 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:19:22.232 EAL: No free 2048 kB hugepages reported on node 1
00:19:22.232 No valid NVMe controllers or AIO or URING devices found
00:19:22.232 Initializing NVMe Controllers
00:19:22.232 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:22.232 Controller IO queue size 128, less than required.
00:19:22.232 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:22.232 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:19:22.232 Controller IO queue size 128, less than required.
00:19:22.232 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:22.232 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:19:22.232 WARNING: Some requested NVMe devices were skipped
00:19:22.232 00:56:09 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:19:22.232 EAL: No free 2048 kB hugepages reported on node 1
00:19:24.766 Initializing NVMe Controllers
00:19:24.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:24.766 Controller IO queue size 128, less than required.
00:19:24.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:24.766 Controller IO queue size 128, less than required.
00:19:24.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:24.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:19:24.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:19:24.766 Initialization complete. Launching workers.
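The -o 36964 run above produced no data for a simple reason: spdk_nvme_perf requires the IO size to be a whole multiple of each namespace's 512-byte sector, and 36964 = 72 * 512 + 100 leaves a remainder, so both namespaces were dropped and no workers ran. A quick shell check (36864, i.e. 36 KiB, would have passed):

    echo $((36964 % 512))   # 100 -> rejected, ns removed from test
    echo $((36864 % 512))   # 0   -> would be accepted

The --transport-stat run just launched adds per-lcore poll and completion counters on top of the usual latency table; its output follows.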
00:19:24.766
00:19:24.766 ====================
00:19:24.766 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:19:24.766 TCP transport:
00:19:24.766 	polls: 25496
00:19:24.766 	idle_polls: 7564
00:19:24.766 	sock_completions: 17932
00:19:24.766 	nvme_completions: 3737
00:19:24.766 	submitted_requests: 5608
00:19:24.766 	queued_requests: 1
00:19:24.766
00:19:24.766 ====================
00:19:24.766 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:19:24.766 TCP transport:
00:19:24.766 	polls: 26197
00:19:24.766 	idle_polls: 9456
00:19:24.766 	sock_completions: 16741
00:19:24.766 	nvme_completions: 4107
00:19:24.766 	submitted_requests: 6144
00:19:24.766 	queued_requests: 1
00:19:24.766 ========================================================
00:19:24.766                                                               Latency(us)
00:19:24.766 Device Information                            :       IOPS      MiB/s    Average        min        max
00:19:24.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     932.47     233.12  142353.15   82581.51  192070.76
00:19:24.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    1024.82     256.20  125761.77   55671.12  167241.80
00:19:24.766 ========================================================
00:19:24.766 Total                                   :    1957.28     489.32  133666.05   55671.12  192070.76
00:19:24.766
00:19:24.766 00:56:11 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync
00:19:24.766 00:56:11 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:24.766 00:56:11 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:19:24.766 00:56:11 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:19:24.766 00:56:11 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:19:24.766 00:56:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:25.024 rmmod nvme_tcp
00:19:25.024 rmmod nvme_fabrics
00:19:25.024 rmmod nvme_keyring
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 4048650 ']'
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 4048650
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 4048650 ']'
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 4048650
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4048650
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4048650'
00:19:25.024 killing process with pid 4048650
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 4048650
00:19:25.024 [2024-05-15 00:56:11.902542] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:19:25.024 00:56:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 4048650
00:19:26.925 00:56:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:26.925 00:56:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:19:26.925 00:56:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:19:26.925 00:56:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:26.925 00:56:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:19:26.925 00:56:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:26.925 00:56:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:26.925 00:56:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:28.832 00:56:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:19:28.832
00:19:28.832 real 0m20.249s
00:19:28.832 user 1m3.812s
00:19:28.832 sys 0m4.479s
00:19:28.832 00:56:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:19:28.832 00:56:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:19:28.832 ************************************
00:19:28.832 END TEST nvmf_perf
00:19:28.832 ************************************
00:19:28.832 00:56:15 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:19:28.832 00:56:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:19:28.832 00:56:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:19:28.832 00:56:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:19:28.832 ************************************
00:19:28.832 START TEST nvmf_fio_host
00:19:28.832 ************************************
00:19:28.832 00:56:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:19:28.832 * Looking for test storage...
00:19:28.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:28.832 00:56:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.832 00:56:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:19:28.833 00:56:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
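(For readers untangling the trace: the nvmf/common.sh environment block above reduces to the following minimal bash sketch. Every value is copied from this run; the host NQN/UUID comes from `nvme gen-hostnqn` and is specific to this machine, so it will differ on any other host.)

  # NVMe/TCP test ports used throughout the run
  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  # Initiator identity; gen-hostnqn output is host-specific
  NVME_HOSTNQN=$(nvme gen-hostnqn)  # here: nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
  NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc   # the uuid portion of the NQN above
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
  NET_TYPE=phy   # physical NICs on this rig, not veth/virt

These variables feed the listener provisioning and fio plugin invocations that appear later in the trace.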
00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:30.263 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:30.263 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:30.263 Found net devices under 0000:08:00.0: cvl_0_0 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:30.263 Found net devices under 0000:08:00.1: cvl_0_1 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:30.263 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.264 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.264 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:30.264 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:30.264 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.264 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:30.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:19:30.522 00:19:30.522 --- 10.0.0.2 ping statistics --- 00:19:30.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.522 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:30.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:19:30.522 00:19:30.522 --- 10.0.0.1 ping statistics --- 00:19:30.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.522 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=4051644 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 4051644 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 4051644 ']' 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:30.522 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.522 [2024-05-15 00:56:17.483831] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:19:30.522 [2024-05-15 00:56:17.483919] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.522 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.522 [2024-05-15 00:56:17.549403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.781 [2024-05-15 00:56:17.666474] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
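(The namespace plumbing and target launch interleaved through the trace above reduce to the short sequence below, shown as a bash sketch. Interface names cvl_0_0/cvl_0_1 and the SPDK path are the ones from this run; the two devices are the two ports of the same Intel E810 NIC (0000:08:00.0/0000:08:00.1, device 0x159b), with cvl_0_0 moved into a fresh namespace as the target side so NVMe/TCP traffic between 10.0.0.1 and 10.0.0.2 actually traverses the link.)

  # Clear any stale addresses, then give the target port its own namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Initiator keeps cvl_0_1 in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open TCP/4420 on the initiator-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Sanity: one ping each way before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Launch the SPDK target inside the namespace
  # (flags as traced: shm id 0, all tracepoint groups, 4-core mask)
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Tear-down at the end of each test is the mirror image: remove_spdk_ns deletes the namespace and the leftover initiator address is dropped with the `ip -4 addr flush cvl_0_1` visible after nvmftestfini.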
00:19:30.781 [2024-05-15 00:56:17.666537] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.781 [2024-05-15 00:56:17.666553] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.781 [2024-05-15 00:56:17.666566] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.781 [2024-05-15 00:56:17.666578] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.781 [2024-05-15 00:56:17.666662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.781 [2024-05-15 00:56:17.666718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.781 [2024-05-15 00:56:17.666769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:30.781 [2024-05-15 00:56:17.666772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.781 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:30.781 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:19:30.781 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:30.781 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.781 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.781 [2024-05-15 00:56:17.803583] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.781 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.781 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:19:30.781 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:30.781 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.781 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:30.781 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.039 Malloc1 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:19:31.039 [2024-05-15 00:56:17.881800] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:31.039 [2024-05-15 00:56:17.882098] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:19:31.039 
00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:19:31.039 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:31.300 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:31.300 fio-3.35 00:19:31.300 Starting 1 thread 00:19:31.300 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.830 00:19:33.830 test: (groupid=0, jobs=1): err= 0: pid=4051789: Wed May 15 00:56:20 2024 00:19:33.830 read: IOPS=7419, BW=29.0MiB/s (30.4MB/s)(58.2MiB/2008msec) 00:19:33.830 slat (usec): min=2, max=124, avg= 2.80, stdev= 1.55 00:19:33.830 clat (usec): min=2751, max=16010, avg=9513.82, stdev=761.30 00:19:33.830 lat (usec): min=2771, max=16013, avg=9516.62, stdev=761.18 00:19:33.830 clat percentiles (usec): 00:19:33.830 | 1.00th=[ 7767], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8979], 00:19:33.830 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:19:33.830 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10421], 95.00th=[10683], 00:19:33.830 | 99.00th=[11076], 99.50th=[11338], 99.90th=[14222], 99.95th=[15270], 00:19:33.830 | 99.99th=[16057] 00:19:33.830 bw ( KiB/s): min=28768, max=30144, per=99.97%, avg=29670.00, stdev=623.09, samples=4 00:19:33.830 iops : min= 7192, max= 7536, avg=7417.50, stdev=155.77, samples=4 00:19:33.830 write: IOPS=7394, BW=28.9MiB/s (30.3MB/s)(58.0MiB/2008msec); 0 zone resets 00:19:33.830 slat (usec): min=2, max=112, avg= 2.97, stdev= 1.19 00:19:33.830 clat (usec): min=1296, max=14240, avg=7664.62, stdev=644.67 00:19:33.830 lat (usec): min=1304, max=14242, avg=7667.59, stdev=644.59 00:19:33.830 clat percentiles (usec): 00:19:33.830 | 1.00th=[ 6259], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 7177], 00:19:33.830 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7832], 00:19:33.830 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8586], 00:19:33.830 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[12387], 99.95th=[13960], 00:19:33.830 | 99.99th=[14222] 00:19:33.830 bw ( KiB/s): min=29200, max=29936, per=99.97%, avg=29572.00, stdev=342.01, samples=4 00:19:33.830 iops : min= 7300, max= 7484, avg=7393.00, stdev=85.50, samples=4 00:19:33.830 lat (msec) : 2=0.01%, 4=0.12%, 10=87.74%, 20=12.14% 00:19:33.830 cpu : usr=63.28%, sys=32.34%, ctx=74, majf=0, minf=39 00:19:33.830 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:33.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:33.830 issued rwts: total=14899,14849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:33.830 00:19:33.830 Run status group 0 (all jobs): 00:19:33.830 READ: bw=29.0MiB/s (30.4MB/s), 29.0MiB/s-29.0MiB/s (30.4MB/s-30.4MB/s), io=58.2MiB (61.0MB), run=2008-2008msec 00:19:33.830 WRITE: bw=28.9MiB/s (30.3MB/s), 28.9MiB/s-28.9MiB/s (30.3MB/s-30.3MB/s), io=58.0MiB (60.8MB), run=2008-2008msec 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:19:33.830 00:56:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:19:33.830 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:19:33.830 fio-3.35 00:19:33.830 Starting 1 thread 00:19:33.830 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.359 00:19:36.359 test: (groupid=0, jobs=1): err= 0: pid=4052125: Wed May 15 00:56:23 2024 00:19:36.359 read: IOPS=6357, BW=99.3MiB/s (104MB/s)(200MiB/2010msec) 00:19:36.359 slat (usec): min=3, max=105, avg= 3.97, stdev= 1.59 00:19:36.359 clat (usec): min=4037, max=24525, avg=11550.69, stdev=2787.22 00:19:36.359 lat (usec): min=4041, max=24528, avg=11554.66, 
stdev=2787.10 00:19:36.360 clat percentiles (usec): 00:19:36.360 | 1.00th=[ 5800], 5.00th=[ 7177], 10.00th=[ 8225], 20.00th=[ 9372], 00:19:36.360 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11207], 60.00th=[11863], 00:19:36.360 | 70.00th=[12780], 80.00th=[13698], 90.00th=[14877], 95.00th=[16581], 00:19:36.360 | 99.00th=[19792], 99.50th=[20317], 99.90th=[23725], 99.95th=[23987], 00:19:36.360 | 99.99th=[24511] 00:19:36.360 bw ( KiB/s): min=40288, max=64768, per=49.57%, avg=50424.00, stdev=10318.91, samples=4 00:19:36.360 iops : min= 2518, max= 4048, avg=3151.50, stdev=644.93, samples=4 00:19:36.360 write: IOPS=3670, BW=57.3MiB/s (60.1MB/s)(104MiB/1810msec); 0 zone resets 00:19:36.360 slat (usec): min=32, max=201, avg=36.92, stdev= 6.12 00:19:36.360 clat (usec): min=7872, max=30432, avg=15660.17, stdev=4138.49 00:19:36.360 lat (usec): min=7910, max=30470, avg=15697.09, stdev=4136.18 00:19:36.360 clat percentiles (usec): 00:19:36.360 | 1.00th=[ 8717], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11731], 00:19:36.360 | 30.00th=[12780], 40.00th=[13698], 50.00th=[15008], 60.00th=[16909], 00:19:36.360 | 70.00th=[18482], 80.00th=[19792], 90.00th=[21365], 95.00th=[22414], 00:19:36.360 | 99.00th=[24249], 99.50th=[25822], 99.90th=[29754], 99.95th=[30016], 00:19:36.360 | 99.99th=[30540] 00:19:36.360 bw ( KiB/s): min=42272, max=68416, per=89.51%, avg=52560.00, stdev=11149.62, samples=4 00:19:36.360 iops : min= 2642, max= 4276, avg=3285.00, stdev=696.85, samples=4 00:19:36.360 lat (msec) : 10=20.44%, 20=72.71%, 50=6.85% 00:19:36.360 cpu : usr=71.63%, sys=23.79%, ctx=43, majf=0, minf=63 00:19:36.360 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:36.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:36.360 issued rwts: total=12779,6643,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:36.360 00:19:36.360 Run status group 0 (all jobs): 00:19:36.360 READ: bw=99.3MiB/s (104MB/s), 99.3MiB/s-99.3MiB/s (104MB/s-104MB/s), io=200MiB (209MB), run=2010-2010msec 00:19:36.360 WRITE: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=104MiB (109MB), run=1810-1810msec 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:36.360 rmmod nvme_tcp 00:19:36.360 rmmod nvme_fabrics 00:19:36.360 rmmod nvme_keyring 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 4051644 ']' 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 4051644 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 4051644 ']' 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 4051644 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4051644 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4051644' 00:19:36.360 killing process with pid 4051644 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 4051644 00:19:36.360 [2024-05-15 00:56:23.231093] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:36.360 00:56:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 4051644 00:19:36.619 00:56:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:36.619 00:56:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:36.619 00:56:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:36.619 00:56:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:36.619 00:56:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:36.619 00:56:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.619 00:56:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:36.619 00:56:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.525 00:56:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:38.525 00:19:38.525 real 0m9.942s 00:19:38.525 user 0m26.100s 00:19:38.525 sys 0m3.641s 00:19:38.525 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:38.525 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.525 ************************************ 00:19:38.525 END TEST nvmf_fio_host 00:19:38.525 ************************************ 00:19:38.525 00:56:25 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:38.525 00:56:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:38.525 00:56:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:19:38.525 00:56:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:38.526 ************************************ 00:19:38.526 START TEST nvmf_failover 00:19:38.526 ************************************ 00:19:38.526 00:56:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:38.785 * Looking for test storage... 00:19:38.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:19:38.785 00:56:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:40.167 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:40.167 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:40.167 Found net devices under 0000:08:00.0: cvl_0_0 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.167 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:40.168 Found net devices under 0000:08:00.1: cvl_0_1 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.168 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:40.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:40.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:19:40.426 00:19:40.426 --- 10.0.0.2 ping statistics --- 00:19:40.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.426 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:19:40.426 00:19:40.426 --- 10.0.0.1 ping statistics --- 00:19:40.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.426 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=4053816 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 4053816 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 4053816 ']' 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:40.426 00:56:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:40.426 [2024-05-15 00:56:27.386249] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
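What the nvmf_tcp_init block above does: it takes the two discovered E810 ports (which must be externally connected for the cross-namespace ping to succeed), moves one of them (cvl_0_0) into a private network namespace, and addresses the pair as 10.0.0.2 (target, inside cvl_0_0_ns_spdk) and 10.0.0.1 (initiator, root namespace), so the NVMe/TCP traffic below crosses a real NIC rather than loopback. A minimal standalone sketch of the same plumbing, using the interface names as discovered above:
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean addresses
    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one port out of the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on 4420, as in the run
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # verify both directions
Every target-side command from here on is wrapped in ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix set above), which is why the nvmf_tgt started below listens on 10.0.0.2 while bdevperf later connects from the root namespace.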
00:19:40.426 [2024-05-15 00:56:27.386349] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.426 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.426 [2024-05-15 00:56:27.451482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:40.685 [2024-05-15 00:56:27.567619] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.685 [2024-05-15 00:56:27.567678] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.685 [2024-05-15 00:56:27.567694] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.685 [2024-05-15 00:56:27.567707] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.685 [2024-05-15 00:56:27.567719] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.685 [2024-05-15 00:56:27.567803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.685 [2024-05-15 00:56:27.567855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:40.685 [2024-05-15 00:56:27.567858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.685 00:56:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:40.685 00:56:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:19:40.685 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:40.685 00:56:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:40.685 00:56:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:40.685 00:56:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.685 00:56:27 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:40.944 [2024-05-15 00:56:27.973863] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.944 00:56:27 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:41.510 Malloc0 00:19:41.510 00:56:28 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:41.768 00:56:28 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:42.027 00:56:28 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:42.285 [2024-05-15 00:56:29.174242] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:42.285 [2024-05-15 00:56:29.174581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.285 00:56:29 nvmf_tcp.nvmf_failover 
-- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:42.543 [2024-05-15 00:56:29.467290] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:42.543 00:56:29 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:19:42.801 [2024-05-15 00:56:29.760288] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:19:42.801 00:56:29 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4054054 00:19:42.801 00:56:29 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:42.801 00:56:29 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:42.801 00:56:29 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4054054 /var/tmp/bdevperf.sock 00:19:42.801 00:56:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 4054054 ']' 00:19:42.801 00:56:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.801 00:56:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:42.801 00:56:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
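Read together, the RPC calls above amount to a five-step target build: one TCP transport, one RAM-backed bdev, one subsystem with that bdev as its namespace, and three listeners on consecutive ports, which are exactly the paths the failover loop below cycles through. Condensed into a sketch (rpc.py path shortened into a variable; flags kept as in the run):
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192    # TCP transport, options as set by common.sh
    $RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                  # one listener per failover path
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done
The deprecation warning logged on the first add_listener call is incidental: the old [listen_]address.transport spelling is still accepted but is slated for removal in favor of trtype in v24.09.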
00:19:42.801 00:56:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable
00:56:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:19:43.059 00:56:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:56:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0
00:56:30 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:19:43.625 NVMe0n1
00:19:43.883 00:56:30 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:19:43.883
00:19:43.883 00:56:30 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4054160
00:56:30 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:56:30 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:19:45.260 00:56:31 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:45.260 [2024-05-15 00:56:32.189509] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69d1b0 is same with the state(5) to be set
... (same recv-state message repeated for tqpair=0x69d1b0 through [2024-05-15 00:56:32.189846])
00:56:32 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:19:48.541 00:56:35 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:19:48.541
00:19:48.541 00:56:35 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:19:48.799 [2024-05-15 00:56:35.816040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69e8d0 is same with the state(5) to be set
... (same recv-state message repeated for tqpair=0x69e8d0 through [2024-05-15 00:56:35.816736])
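Each failover step in this run is the same two-beat move: attach a standby controller path under the same bdev name (NVMe0), then remove the listener the active path is connected through. The burst of tcp.c:1598 recv-state errors is the target tearing down the orphaned qpair; the test's pass condition is that bdevperf's I/O keeps flowing on the surviving path. One step as a standalone sketch, with sockets, NQN, and ports as in the run above:
    # add a standby path on port 4422 to the existing NVMe0 controller ...
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # ... then pull the listener under the currently active path
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3   # give the initiator time to notice the drop and switch paths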
00:56:35 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:19:52.081 00:56:38 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:52.081 [2024-05-15 00:56:39.114006] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:52.081 00:56:39 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:19:53.456 00:56:40 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:19:53.456 [2024-05-15 00:56:40.414540] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69efb0 is same with the state(5) to be set
... (same recv-state message repeated for tqpair=0x69efb0 through [2024-05-15 00:56:40.414731])
00:19:53.456 00:56:40 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 4054160
00:20:00.024 0
00:20:00.025 00:56:46 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 4054054
00:20:00.025 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 4054054 ']'
00:20:00.025 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 4054054
00:20:00.025 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:20:00.025 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:20:00.025 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4054054
00:20:00.025 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:20:00.025 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:20:00.025 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4054054'
00:20:00.025 killing process with pid 4054054
00:20:00.025 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 4054054
00:20:00.025 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 4054054
00:20:00.025 00:56:46 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:20:00.025 [2024-05-15 00:56:29.826993] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization...
00:20:00.025 [2024-05-15 00:56:29.827110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4054054 ]
00:20:00.025 EAL: No free 2048 kB hugepages reported on node 1
00:20:00.025 [2024-05-15 00:56:29.887433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:00.025 [2024-05-15 00:56:30.004690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:00.025 Running I/O for 15 seconds...
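The nvme_qpair dump that follows (from try.txt) is bdevperf's record of the first listener removal at 00:56:32: every command still in flight on the deleted submission queue completes ABORTED - SQ DELETION with status (00/08), i.e. status code type 0 (generic) and status code 0x08 (command aborted due to SQ deletion). Since the run still exits 0, these aborts are absorbed by the path failover rather than surfacing as I/O errors. For eyeballing a dump like this by hand, a throwaway summary filter can help (an awk sketch, not part of the test):
    # summarize the aborted in-flight I/O recorded in try.txt
    awk '/nvme_io_qpair_print_command/ {
             for (i = 1; i <= NF; i++)
                 if ($i ~ /^lba:/) {                 # pick out the lba:<n> token
                     split($i, a, ":"); lba = a[2] + 0
                     if (!n || lba < min) min = lba
                     if (lba > max) max = lba
                     n++
                 }
         }
         /ABORTED - SQ DELETION/ { aborts++ }
         END { printf "%d commands dumped, lba %d..%d, %d aborted by SQ deletion\n",
                      n, min, max, aborts }' try.txt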
00:20:00.025 [2024-05-15 00:56:32.190154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.025 [2024-05-15 00:56:32.190198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.025 [2024-05-15 00:56:32.190328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:69768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:00.025 [2024-05-15 00:56:32.190343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (the dump continues in the same command/completion pattern for the rest of the in-flight queue: READs lba:69240 through lba:69624 and WRITEs lba:69776 through lba:70248, len:8 each, every completion ABORTED - SQ DELETION (00/08))
00:20:00.027 [2024-05-15 00:56:32.193814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:69632
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.027 [2024-05-15 00:56:32.193829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.027 [2024-05-15 00:56:32.193849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.027 [2024-05-15 00:56:32.193863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.027 [2024-05-15 00:56:32.193880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.027 [2024-05-15 00:56:32.193894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.027 [2024-05-15 00:56:32.193911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.027 [2024-05-15 00:56:32.193925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.027 [2024-05-15 00:56:32.193949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:32.193965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:32.193981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:32.193996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:32.194013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:32.194027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:32.194044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:32.194058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:32.194074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:32.194088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:32.194105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:32.194119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:32.194136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:00.028 [2024-05-15 00:56:32.194150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:32.194167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:69720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:32.194181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:32.194198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:32.194217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:32.194234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:32.194257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:32.194274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:32.194289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:32.194306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:32.194320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:32.194336] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf8c0 is same with the state(5) to be set 00:20:00.028 [2024-05-15 00:56:32.194356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:00.028 [2024-05-15 00:56:32.194368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:00.028 [2024-05-15 00:56:32.194381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69760 len:8 PRP1 0x0 PRP2 0x0 00:20:00.028 [2024-05-15 00:56:32.194395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:32.194456] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18cf8c0 was disconnected and freed. reset controller. 
00:20:00.028 [2024-05-15 00:56:32.194487] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:20:00.028 [2024-05-15 00:56:32.194526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:00.028 [2024-05-15 00:56:32.194545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.028 [2024-05-15 00:56:32.194561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:00.028 [2024-05-15 00:56:32.194575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.028 [2024-05-15 00:56:32.194590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:00.028 [2024-05-15 00:56:32.194604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.028 [2024-05-15 00:56:32.194619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:00.028 [2024-05-15 00:56:32.194633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.028 [2024-05-15 00:56:32.194647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:00.028 [2024-05-15 00:56:32.198763] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:00.028 [2024-05-15 00:56:32.198806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b0d90 (9): Bad file descriptor
00:20:00.028 [2024-05-15 00:56:32.338963] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
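
The failover sequence above (admin queue aborted, controller marked failed, disconnect, reconnect, "Resetting controller successful") is what bdev_nvme does when a second transport ID has been registered for the same controller. A minimal sketch of that setup, not the actual script this job ran: the addresses and NQN are taken from the log, while the rpc.py flag names follow current SPDK and may differ between versions.

  # Register the same subsystem twice under one controller name; the second
  # call gives bdev_nvme an alternate trid to fail over to.
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1
  # When the 4420 connection drops, in-flight I/O is completed as
  # "ABORTED - SQ DELETION" and the controller reconnects on 4421,
  # which is the failover seen in the log above.
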
00:20:00.028 [2024-05-15 00:56:35.819592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.819641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.819671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.819693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.819712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.819727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.819744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.819759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.819776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.819791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.819807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.819823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.819839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.819854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.819871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.819885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.819902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.819918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.819941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.819958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.819975] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.819990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.820007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.820023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.820040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.820056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.820073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.820088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.820108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.820123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.820139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.820154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.820170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.820185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.820202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.820216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.820233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.820247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.028 [2024-05-15 00:56:35.820263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.028 [2024-05-15 00:56:35.820277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:123 nsid:1 lba:14240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14320 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.820982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.820999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.821014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.821030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.821045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.821062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.821076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.821093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.821107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.821124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.821139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.821156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.821176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.821193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.821208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.821224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.029 [2024-05-15 00:56:35.821239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.821257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:00.029 [2024-05-15 00:56:35.821272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.821289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.029 [2024-05-15 00:56:35.821303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.029 [2024-05-15 00:56:35.821324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821589] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821911] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.821968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.821983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 
[2024-05-15 00:56:35.822573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.030 [2024-05-15 00:56:35.822604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.030 [2024-05-15 00:56:35.822619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.031 [2024-05-15 00:56:35.822635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.031 [2024-05-15 00:56:35.822649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.031 [2024-05-15 00:56:35.822665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.031 [2024-05-15 00:56:35.822680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.031 [2024-05-15 00:56:35.822696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.031 [2024-05-15 00:56:35.822715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.031 [2024-05-15 00:56:35.822731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.031 [2024-05-15 00:56:35.822745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.031 [2024-05-15 00:56:35.822762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.031 [2024-05-15 00:56:35.822776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.031 [2024-05-15 00:56:35.822792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.031 [2024-05-15 00:56:35.822806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.031 [2024-05-15 00:56:35.822823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.031 [2024-05-15 00:56:35.822837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.031 [2024-05-15 00:56:35.822853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.031 [2024-05-15 00:56:35.822867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.031 [2024-05-15 00:56:35.822884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:00.031 [... repetitive run collapsed: the remaining in-flight WRITE commands (sqid:1, lba:14816-15016, len:8 each) complete with ABORTED - SQ DELETION (00/08) qid:1, and the queued i/o behind them is aborted and completed manually (nvme_qpair.c:243/474/558/579) ...] 00:20:00.031 [2024-05-15 00:56:35.824158] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a7a170 was disconnected and freed. reset controller.
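Every completion in the run above carries the status pair "(00/08)". A minimal, self-contained C sketch of how that pair decodes, assuming the standard NVMe completion status layout (bit 0 phase tag, bits 1-8 Status Code, bits 9-11 Status Code Type); the struct and field names here are illustrative, not the SPDK definitions, and C bitfield layout is implementation-defined:

#include <stdint.h>
#include <stdio.h>

/* NVMe completion status word, low bit first (illustrative layout). */
struct nvme_status {
    uint16_t p   : 1;  /* phase tag */
    uint16_t sc  : 8;  /* Status Code */
    uint16_t sct : 3;  /* Status Code Type */
    uint16_t rsv : 2;
    uint16_t m   : 1;  /* more status available */
    uint16_t dnr : 1;  /* do not retry */
};

int main(void)
{
    /* The log prints SCT/SC as "(00/08)": SCT 0x0 is generic command
     * status, and SC 0x08 is Command Aborted due to SQ Deletion. */
    struct nvme_status st = { .sct = 0x0, .sc = 0x08 };
    if (st.sct == 0x0 && st.sc == 0x08)
        printf("ABORTED - SQ DELETION (%02x/%02x)\n", st.sct, st.sc);
    return 0;
}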
00:20:00.032 [2024-05-15 00:56:35.824181] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:20:00.032 [... repetitive run collapsed: the four outstanding ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:3-0) each complete with ABORTED - SQ DELETION (00/08) ...] 00:20:00.032 [2024-05-15 00:56:35.824359] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:00.032 [2024-05-15 00:56:35.824419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b0d90 (9): Bad file descriptor 00:20:00.032 [2024-05-15 00:56:35.828491] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:00.032 [2024-05-15 00:56:35.905374] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
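The sequence above is the failover pattern this test exercises: the active path (10.0.0.2:4421) is torn down, outstanding admin and I/O commands are aborted, and the controller is reset against the next registered transport ID (10.0.0.2:4422). A hedged C sketch of that try-next-path loop; the names (trid, try_connect) and the stubbed connect logic are illustrative assumptions, not the SPDK bdev_nvme implementation:

#include <stdbool.h>
#include <stdio.h>

struct trid { const char *addr; const char *svcid; };

/* Stub standing in for a transport connect attempt: pretend the first
 * path is down and the second is reachable, matching the log above. */
static bool try_connect(const struct trid *t)
{
    return t->svcid[3] == '2';  /* only "4422" succeeds */
}

int main(void)
{
    struct trid paths[] = { { "10.0.0.2", "4421" }, { "10.0.0.2", "4422" } };
    for (size_t i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {
        if (try_connect(&paths[i])) {
            printf("Resetting controller successful (%s:%s)\n",
                   paths[i].addr, paths[i].svcid);
            return 0;
        }
        printf("Start failover from %s:%s\n", paths[i].addr, paths[i].svcid);
    }
    return 1;  /* all registered paths exhausted */
}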
00:20:00.032 [2024-05-15 00:56:40.418685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.032 [... repetitive run collapsed: a second abort run in which in-flight READ commands (sqid:1, lba:40312-40384) and WRITE commands (sqid:1, lba:40392-40768, len:8 each) complete with ABORTED - SQ DELETION (00/08) qid:1, then the queued WRITE i/o (lba:40776-41216) is aborted and completed manually (nvme_qpair.c:243/474/558/579); run truncated mid-entry ...]
cid:0 nsid:1 lba:41224 len:8 PRP1 0x0 PRP2 0x0 00:20:00.036 [2024-05-15 00:56:40.423665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.423680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:00.036 [2024-05-15 00:56:40.423691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:00.036 [2024-05-15 00:56:40.423704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41232 len:8 PRP1 0x0 PRP2 0x0 00:20:00.036 [2024-05-15 00:56:40.423720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.423735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:00.036 [2024-05-15 00:56:40.423747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:00.036 [2024-05-15 00:56:40.423759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41240 len:8 PRP1 0x0 PRP2 0x0 00:20:00.036 [2024-05-15 00:56:40.423774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.423788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:00.036 [2024-05-15 00:56:40.423800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:00.036 [2024-05-15 00:56:40.423812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41248 len:8 PRP1 0x0 PRP2 0x0 00:20:00.036 [2024-05-15 00:56:40.423826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.423841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:00.036 [2024-05-15 00:56:40.423853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:00.036 [2024-05-15 00:56:40.423866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41256 len:8 PRP1 0x0 PRP2 0x0 00:20:00.036 [2024-05-15 00:56:40.423881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.423895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:00.036 [2024-05-15 00:56:40.423911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:00.036 [2024-05-15 00:56:40.423924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41264 len:8 PRP1 0x0 PRP2 0x0 00:20:00.036 [2024-05-15 00:56:40.423946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.423961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:00.036 [2024-05-15 00:56:40.423974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:00.036 [2024-05-15 00:56:40.423986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41272 len:8 PRP1 0x0 PRP2 0x0 
00:20:00.036 [2024-05-15 00:56:40.424000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.424019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:00.036 [2024-05-15 00:56:40.424033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:00.036 [2024-05-15 00:56:40.424047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41280 len:8 PRP1 0x0 PRP2 0x0 00:20:00.036 [2024-05-15 00:56:40.424062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.424078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:00.036 [2024-05-15 00:56:40.424090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:00.036 [2024-05-15 00:56:40.424103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41288 len:8 PRP1 0x0 PRP2 0x0 00:20:00.036 [2024-05-15 00:56:40.424117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.424132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:00.036 [2024-05-15 00:56:40.424145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:00.036 [2024-05-15 00:56:40.424158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41296 len:8 PRP1 0x0 PRP2 0x0 00:20:00.036 [2024-05-15 00:56:40.424172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.424187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:00.036 [2024-05-15 00:56:40.424199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:00.036 [2024-05-15 00:56:40.424211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41304 len:8 PRP1 0x0 PRP2 0x0 00:20:00.036 [2024-05-15 00:56:40.424225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.424240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:00.036 [2024-05-15 00:56:40.424251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:00.036 [2024-05-15 00:56:40.424264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41312 len:8 PRP1 0x0 PRP2 0x0 00:20:00.036 [2024-05-15 00:56:40.424278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.424292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:00.036 [2024-05-15 00:56:40.424305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:00.036 [2024-05-15 00:56:40.424317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41320 len:8 PRP1 0x0 PRP2 0x0 00:20:00.036 [2024-05-15 00:56:40.424331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.424346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:00.036 [2024-05-15 00:56:40.424363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:00.036 [2024-05-15 00:56:40.424375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41328 len:8 PRP1 0x0 PRP2 0x0 00:20:00.036 [2024-05-15 00:56:40.424390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.424451] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18d41a0 was disconnected and freed. reset controller. 00:20:00.036 [2024-05-15 00:56:40.424474] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:20:00.036 [2024-05-15 00:56:40.424511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.036 [2024-05-15 00:56:40.424534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.424552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.036 [2024-05-15 00:56:40.424568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.424584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.036 [2024-05-15 00:56:40.424600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.424616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.036 [2024-05-15 00:56:40.424630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.036 [2024-05-15 00:56:40.424645] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:00.036 [2024-05-15 00:56:40.424701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b0d90 (9): Bad file descriptor 00:20:00.036 [2024-05-15 00:56:40.428753] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:00.036 [2024-05-15 00:56:40.504118] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:00.036
00:20:00.036 Latency(us)
00:20:00.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:00.036 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:00.036 Verification LBA range: start 0x0 length 0x4000
00:20:00.036 NVMe0n1 : 15.01 7052.69 27.55 580.21 0.00 16735.28 970.90 19709.35
00:20:00.036 ===================================================================================================================
00:20:00.036 Total : 7052.69 27.55 580.21 0.00 16735.28 970.90 19709.35
00:20:00.036 Received shutdown signal, test time was about 15.000000 seconds
00:20:00.036
00:20:00.036 Latency(us)
00:20:00.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:00.036 ===================================================================================================================
00:20:00.036 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:00.037 00:56:46 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:20:00.037 00:56:46 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:20:00.037 00:56:46 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:20:00.037 00:56:46 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4055561
00:20:00.037 00:56:46 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:20:00.037 00:56:46 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4055561 /var/tmp/bdevperf.sock
00:20:00.037 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 4055561 ']'
00:20:00.037 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:00.037 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:20:00.037 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
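The pass/fail gate at host/failover.sh@65-67 above counts 'Resetting controller successful' notices in the captured bdevperf log and fails the test unless exactly three resets completed. Reduced to its essentials, the check is the sketch below; try.txt is the capture file this run uses (see the cat at failover.sh@94 further down):

    # minimal sketch of the failover pass/fail gate, assuming the bdevperf
    # output was captured to try.txt as in this run
    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count == 3 )) || exit 1   # expect exactly three successful controller resets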
00:20:00.037 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:00.037 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:00.037 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:00.037 00:56:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:20:00.037 00:56:46 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:00.037 [2024-05-15 00:56:46.895598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:00.037 00:56:46 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:00.295 [2024-05-15 00:56:47.148326] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:00.295 00:56:47 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:00.553 NVMe0n1 00:20:00.553 00:56:47 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:01.119 00:20:01.119 00:56:47 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:01.378 00:20:01.378 00:56:48 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:01.378 00:56:48 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:20:01.636 00:56:48 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:01.894 00:56:48 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:20:05.176 00:56:51 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:05.176 00:56:51 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:20:05.176 00:56:52 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4056069 00:20:05.176 00:56:52 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 4056069 00:20:05.176 00:56:52 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:06.551 0 00:20:06.551 00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:06.551 [2024-05-15 00:56:46.384438] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:20:06.551 [2024-05-15 00:56:46.384543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4055561 ]
00:20:06.551 EAL: No free 2048 kB hugepages reported on node 1
00:20:06.551 [2024-05-15 00:56:46.444994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:06.551 [2024-05-15 00:56:46.560885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:06.551 [2024-05-15 00:56:48.743064] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:20:06.552 [2024-05-15 00:56:48.743145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:06.552 [2024-05-15 00:56:48.743170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical ABORTED - SQ DELETION completions follow for the remaining queued ASYNC EVENT REQUESTs, cid:1 through cid:3 ...]
00:20:06.552 [2024-05-15 00:56:48.743311] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:06.552 [2024-05-15 00:56:48.743374] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:06.552 [2024-05-15 00:56:48.743418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf42d90 (9): Bad file descriptor
00:20:06.552 [2024-05-15 00:56:48.751347] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:06.552 Running I/O for 1 seconds...
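The one-second verify pass recorded above is driven by a standalone bdevperf started in RPC-wait mode (-z), with the workload kicked off later over its UNIX socket. Condensed from the trace at host/failover.sh@72 and @89, the pattern is roughly this (paths from this run; a sketch, not the verbatim script):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -z: wait for an RPC before running; -r: RPC socket; -q/-o/-w/-t: qdepth, IO size, workload, runtime
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    # ...attach the NVMe-oF paths over the socket (failover.sh@76-80), then start the run:
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests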
00:20:06.552
00:20:06.552 Latency(us)
00:20:06.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:06.552 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:06.552 Verification LBA range: start 0x0 length 0x4000
00:20:06.552 NVMe0n1 : 1.01 7329.04 28.63 0.00 0.00 17384.04 2172.40 21359.88
00:20:06.552 ===================================================================================================================
00:20:06.552 Total : 7329.04 28.63 0.00 0.00 17384.04 2172.40 21359.88
00:20:06.552 00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:20:07.105 00:56:54 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:07.365 00:56:54 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:20:10.643 00:56:57 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:56:57 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:20:10.643 00:56:57 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 4055561
00:56:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 4055561 ']'
00:56:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 4055561
00:56:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:56:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:56:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4055561
00:56:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:56:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:56:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4055561'
killing process with pid 4055561
00:56:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 4055561
00:56:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 4055561
00:20:10.902 00:56:57 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:56:57 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:11.161 00:56:58 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:20:11.161
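Taken together, this phase of the test reduces to the RPC sequence below: publish two extra listeners on the target, register all three paths with the bdevperf-side bdev_nvme, then detach paths one by one so a remaining trid takes over. Addresses, ports, and the NQN are the ones used in this run; the loop is a condensed sketch, not the verbatim script:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422
    for port in 4420 4421 4422; do   # register all three paths with bdevperf
        $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
    done
    # drop a path and confirm the controller survives on the remaining trids
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
    sleep 3
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0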
00:56:58 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:11.161 00:56:58 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:20:11.161 00:56:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:11.161 00:56:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:20:11.161 00:56:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:11.161 00:56:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:20:11.161 00:56:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:11.161 00:56:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:11.161 rmmod nvme_tcp 00:20:11.161 rmmod nvme_fabrics 00:20:11.420 rmmod nvme_keyring 00:20:11.420 00:56:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:11.420 00:56:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:20:11.420 00:56:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:20:11.420 00:56:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 4053816 ']' 00:20:11.420 00:56:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 4053816 00:20:11.420 00:56:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 4053816 ']' 00:20:11.420 00:56:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 4053816 00:20:11.420 00:56:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:20:11.420 00:56:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:11.420 00:56:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4053816 00:20:11.420 00:56:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:11.420 00:56:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:11.420 00:56:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4053816' 00:20:11.420 killing process with pid 4053816 00:20:11.420 00:56:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 4053816 00:20:11.420 [2024-05-15 00:56:58.273917] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:11.420 00:56:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 4053816 00:20:11.680 00:56:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:11.680 00:56:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:11.680 00:56:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:11.680 00:56:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:11.680 00:56:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:11.680 00:56:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.680 00:56:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.680 00:56:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.586 00:57:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:13.586 00:20:13.586 real 0m34.968s 00:20:13.586 user 
2m3.888s 00:20:13.586 sys 0m5.956s 00:20:13.586 00:57:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:13.586 00:57:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:13.586 ************************************ 00:20:13.586 END TEST nvmf_failover 00:20:13.586 ************************************ 00:20:13.586 00:57:00 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:13.586 00:57:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:13.586 00:57:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:13.586 00:57:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:13.586 ************************************ 00:20:13.586 START TEST nvmf_host_discovery 00:20:13.586 ************************************ 00:20:13.586 00:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:13.846 * Looking for test storage... 00:20:13.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:13.846 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:20:13.847 00:57:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.225 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:15.225 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:20:15.225 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:15.225 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:15.225 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:15.225 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:15.225 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:15.225 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:20:15.225 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:15.225 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:20:15.225 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:15.226 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:15.226 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:15.226 Found net devices under 0000:08:00.0: cvl_0_0 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:15.226 Found net devices under 0000:08:00.1: cvl_0_1 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:15.226 00:57:02 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:15.226 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:15.484 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:15.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:15.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:20:15.485 00:20:15.485 --- 10.0.0.2 ping statistics --- 00:20:15.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.485 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:15.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:15.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:20:15.485 00:20:15.485 --- 10.0.0.1 ping statistics --- 00:20:15.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.485 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=4058218 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 4058218 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 4058218 ']' 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:15.485 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.485 [2024-05-15 00:57:02.402756] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:20:15.485 [2024-05-15 00:57:02.402850] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.485 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.485 [2024-05-15 00:57:02.469330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.743 [2024-05-15 00:57:02.588411] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:15.743 [2024-05-15 00:57:02.588475] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.743 [2024-05-15 00:57:02.588491] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.743 [2024-05-15 00:57:02.588504] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.743 [2024-05-15 00:57:02.588515] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:15.743 [2024-05-15 00:57:02.588552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.743 [2024-05-15 00:57:02.728573] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.743 [2024-05-15 00:57:02.736502] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:15.743 [2024-05-15 00:57:02.736745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.743 null0 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.743 null1 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4058306 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 4058306 /tmp/host.sock 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 4058306 ']' 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:15.743 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:15.743 00:57:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.001 [2024-05-15 00:57:02.813079] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:20:16.001 [2024-05-15 00:57:02.813173] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4058306 ] 00:20:16.001 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.001 [2024-05-15 00:57:02.872917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.001 [2024-05-15 00:57:02.989323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.260 00:57:03 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:16.260 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.518 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:20:16.518 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:20:16.518 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:16.518 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:16.518 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.518 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.518 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.519 [2024-05-15 00:57:03.386446] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:20:16.519 
00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:20:16.519 00:57:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:20:17.450 [2024-05-15 00:57:04.158160] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:17.451 [2024-05-15 00:57:04.158199] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:17.451 [2024-05-15 00:57:04.158226] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:17.451 [2024-05-15 00:57:04.244482] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:20:17.451 [2024-05-15 00:57:04.347224] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:17.451 [2024-05-15 00:57:04.347258] 
bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 
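The assertions above are hard to follow from expanded xtrace alone, so here is a reconstruction of the helpers being exercised. It is inferred from the host/discovery.sh and autotest_common.sh line numbers in the trace rather than copied from the source, and rpc_cmd is assumed to wrap scripts/rpc.py, passing -s through to select the RPC socket.

    # Inferred from host/discovery.sh@55/@59/@63 and autotest_common.sh@910-@916.
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {
        # one trsvcid (port) per path attached to the named controller
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

The bounded ten-iteration, one-second poll in waitforcondition is why a slow discovery update shows up in this log as a few failed comparisons followed by sleep 1 rather than an immediate test failure.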
00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:17.709 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.968 [2024-05-15 00:57:04.850673] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:17.968 [2024-05-15 00:57:04.851817] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:17.968 [2024-05-15 00:57:04.851861] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.968 [2024-05-15 00:57:04.939118] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:20:17.968 00:57:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:20:17.969 [2024-05-15 00:57:04.999767] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:17.969 [2024-05-15 00:57:04.999794] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:17.969 [2024-05-15 00:57:04.999806] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:19.343 00:57:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:19.343 00:57:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:19.343 00:57:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:20:19.343 00:57:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:19.343 00:57:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:19.343 00:57:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.343 00:57:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:19.343 00:57:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:19.343 00:57:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.343 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:19.343 [2024-05-15 00:57:06.087278] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:19.344 [2024-05-15 00:57:06.087327] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:19.344 [2024-05-15 00:57:06.088636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.344 [2024-05-15 00:57:06.088673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.344 [2024-05-15 00:57:06.088693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.344 [2024-05-15 00:57:06.088708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.344 [2024-05-15 00:57:06.088724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.344 [2024-05-15 00:57:06.088747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.344 [2024-05-15 00:57:06.088763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.344 [2024-05-15 00:57:06.088777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.344 [2024-05-15 00:57:06.088792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fbf40 is same with the state(5) to be set 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:19.344 00:57:06 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:19.344 [2024-05-15 00:57:06.098642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fbf40 (9): Bad file descriptor 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.344 [2024-05-15 00:57:06.108689] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:19.344 [2024-05-15 00:57:06.108993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.344 [2024-05-15 00:57:06.109250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.344 [2024-05-15 00:57:06.109294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fbf40 with addr=10.0.0.2, port=4420 00:20:19.344 [2024-05-15 00:57:06.109315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fbf40 is same with the state(5) to be set 00:20:19.344 [2024-05-15 00:57:06.109342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fbf40 (9): Bad file descriptor 00:20:19.344 [2024-05-15 00:57:06.109383] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:19.344 [2024-05-15 00:57:06.109403] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:19.344 [2024-05-15 00:57:06.109422] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:19.344 [2024-05-15 00:57:06.109446] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
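The connect() failed, errno = 111 storm that starts here is expected behavior, not a test bug: errno 111 is ECONNREFUSED. host/discovery.sh@127 has just removed the 4420 listener from the target, so every reconnect the host initiator attempts against that port is refused and each bdev_nvme reset cycle ends in 'Resetting controller failed.' (The ASYNC EVENT REQUEST / ABORTED - SQ DELETION prints above it are the admin queue's outstanding AERs completing as the old qpair is torn down.) The loop persists until the discovery poller re-reads the log page and prunes the dead path, which is the '4420 not found ... 4421 found again' pair further down. In terms of the reconstructed helpers above, the step and the wait look roughly like:

    # target side, host/discovery.sh@127: drop the first data listener
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host side, host/discovery.sh@131: poll until only the second port remains
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'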
00:20:19.344 [2024-05-15 00:57:06.118774] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:19.344 [2024-05-15 00:57:06.119009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.344 [2024-05-15 00:57:06.119237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.344 [2024-05-15 00:57:06.119279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fbf40 with addr=10.0.0.2, port=4420 00:20:19.344 [2024-05-15 00:57:06.119298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fbf40 is same with the state(5) to be set 00:20:19.344 [2024-05-15 00:57:06.119331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fbf40 (9): Bad file descriptor 00:20:19.344 [2024-05-15 00:57:06.119355] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:19.344 [2024-05-15 00:57:06.119370] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:19.344 [2024-05-15 00:57:06.119385] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:19.344 [2024-05-15 00:57:06.119408] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.344 [2024-05-15 00:57:06.128854] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:19.344 [2024-05-15 00:57:06.129091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.344 [2024-05-15 00:57:06.129257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.344 [2024-05-15 00:57:06.129286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fbf40 with addr=10.0.0.2, port=4420 00:20:19.344 [2024-05-15 00:57:06.129304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fbf40 is same with the state(5) to be set 00:20:19.344 [2024-05-15 00:57:06.129330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fbf40 (9): Bad file descriptor 00:20:19.344 [2024-05-15 00:57:06.129354] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:19.344 [2024-05-15 00:57:06.129369] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:19.344 [2024-05-15 00:57:06.129384] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:19.344 [2024-05-15 00:57:06.129407] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:20:19.344 [2024-05-15 00:57:06.138937] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:19.344 [2024-05-15 00:57:06.139113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:19.344 [2024-05-15 00:57:06.140180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:19.344 [2024-05-15 00:57:06.140213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fbf40 with addr=10.0.0.2, port=4420 00:20:19.344 [2024-05-15 00:57:06.140233] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fbf40 is same with the state(5) to be set 00:20:19.344 [2024-05-15 00:57:06.140262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fbf40 (9): Bad file descriptor 00:20:19.344 [2024-05-15 00:57:06.140322] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:19.344 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:19.344 [2024-05-15 00:57:06.140344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:19.344 [2024-05-15 00:57:06.140363] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:19.344 [2024-05-15 00:57:06.140385] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.344 [2024-05-15 00:57:06.149020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:19.344 [2024-05-15 00:57:06.149218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.344 [2024-05-15 00:57:06.149415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.344 [2024-05-15 00:57:06.149443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fbf40 with addr=10.0.0.2, port=4420 00:20:19.344 [2024-05-15 00:57:06.149460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fbf40 is same with the state(5) to be set 00:20:19.344 [2024-05-15 00:57:06.149484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fbf40 (9): Bad file descriptor 00:20:19.344 [2024-05-15 00:57:06.149520] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:19.344 [2024-05-15 00:57:06.149539] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:19.344 [2024-05-15 00:57:06.149554] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:19.344 [2024-05-15 00:57:06.149575] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.344 [2024-05-15 00:57:06.159100] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:19.344 [2024-05-15 00:57:06.159263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.344 [2024-05-15 00:57:06.159463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.344 [2024-05-15 00:57:06.159490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fbf40 with addr=10.0.0.2, port=4420 00:20:19.344 [2024-05-15 00:57:06.159507] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fbf40 is same with the state(5) to be set 00:20:19.344 [2024-05-15 00:57:06.159531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fbf40 (9): Bad file descriptor 00:20:19.344 [2024-05-15 00:57:06.159568] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:19.344 [2024-05-15 00:57:06.159587] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:19.345 [2024-05-15 00:57:06.159603] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:19.345 [2024-05-15 00:57:06.159624] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.345 [2024-05-15 00:57:06.169177] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:19.345 [2024-05-15 00:57:06.169374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.345 [2024-05-15 00:57:06.169574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.345 [2024-05-15 00:57:06.169601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fbf40 with addr=10.0.0.2, port=4420 00:20:19.345 [2024-05-15 00:57:06.169618] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fbf40 is same with the state(5) to be set 00:20:19.345 [2024-05-15 00:57:06.169643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fbf40 (9): Bad file descriptor 00:20:19.345 [2024-05-15 00:57:06.169684] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:19.345 [2024-05-15 00:57:06.169703] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:19.345 [2024-05-15 00:57:06.169718] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:19.345 [2024-05-15 00:57:06.169740] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.345 [2024-05-15 00:57:06.179254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:19.345 [2024-05-15 00:57:06.179462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.345 [2024-05-15 00:57:06.179649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.345 [2024-05-15 00:57:06.179677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fbf40 with addr=10.0.0.2, port=4420 00:20:19.345 [2024-05-15 00:57:06.179702] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fbf40 is same with the state(5) to be set 00:20:19.345 [2024-05-15 00:57:06.179726] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fbf40 (9): Bad file descriptor 00:20:19.345 [2024-05-15 00:57:06.179764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:19.345 [2024-05-15 00:57:06.179783] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:19.345 [2024-05-15 00:57:06.179798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:19.345 [2024-05-15 00:57:06.179820] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:19.345 [2024-05-15 00:57:06.189338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:19.345 [2024-05-15 00:57:06.189531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.345 [2024-05-15 00:57:06.189705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.345 [2024-05-15 00:57:06.189733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fbf40 with addr=10.0.0.2, port=4420 00:20:19.345 [2024-05-15 00:57:06.189750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fbf40 is same with the state(5) to be set 00:20:19.345 [2024-05-15 00:57:06.189780] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fbf40 (9): Bad file descriptor 00:20:19.345 [2024-05-15 00:57:06.189819] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:19.345 [2024-05-15 00:57:06.189838] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:19.345 [2024-05-15 00:57:06.189853] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:19.345 [2024-05-15 00:57:06.189875] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.345 [2024-05-15 00:57:06.199419] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:19.345 [2024-05-15 00:57:06.199612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.345 [2024-05-15 00:57:06.199782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.345 [2024-05-15 00:57:06.199809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fbf40 with addr=10.0.0.2, port=4420 00:20:19.345 [2024-05-15 00:57:06.199827] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fbf40 is same with the state(5) to be set 00:20:19.345 [2024-05-15 00:57:06.199850] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fbf40 (9): Bad file descriptor 00:20:19.345 [2024-05-15 00:57:06.199888] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:19.345 [2024-05-15 00:57:06.199907] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:19.345 [2024-05-15 00:57:06.199923] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:19.345 [2024-05-15 00:57:06.199954] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.345 [2024-05-15 00:57:06.209499] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:19.345 [2024-05-15 00:57:06.209704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.345 [2024-05-15 00:57:06.209909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.345 [2024-05-15 00:57:06.209953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fbf40 with addr=10.0.0.2, port=4420 00:20:19.345 [2024-05-15 00:57:06.209978] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fbf40 is same with the state(5) to be set 00:20:19.345 [2024-05-15 00:57:06.210003] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fbf40 (9): Bad file descriptor 00:20:19.345 [2024-05-15 00:57:06.210039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:19.345 [2024-05-15 00:57:06.210059] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:19.345 [2024-05-15 00:57:06.210074] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:19.345 [2024-05-15 00:57:06.210096] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
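The notification arithmetic in the count check just below is cumulative: notify_get_notifications -i N returns only events newer than id N, so the test keeps a notify_id cursor and advances it by however many events it consumed. A sketch of the two helpers implied by the @74/@75 and @79/@80 xtrace lines (bodies inferred, not verbatim):

    get_notification_count() {
        # count events newer than the cursor, then advance the cursor past them
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
    is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }

This matches the run so far: notify_id stepped 0 -> 1 -> 2 as the two bdev-attach notifications arrived, and the -i 2 query below returns 0 because removing a listener produced no new bdev notification.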
00:20:19.345 [2024-05-15 00:57:06.214051] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:20:19.345 [2024-05-15 00:57:06.214083] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:20:19.345 00:57:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:20.279 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:20:20.537 00:57:07 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.537 00:57:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.468 [2024-05-15 00:57:08.519110] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:21.468 [2024-05-15 00:57:08.519142] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:21.468 [2024-05-15 00:57:08.519166] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:21.726 [2024-05-15 00:57:08.645574] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:20:21.726 [2024-05-15 00:57:08.753637] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:21.726 [2024-05-15 00:57:08.753680] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:21.726 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.726 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:21.726 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:21.726 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:21.726 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:21.726 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:21.726 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:21.726 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:21.726 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:21.726 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.727 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.727 request: 00:20:21.727 { 00:20:21.727 "name": "nvme", 00:20:21.727 "trtype": 
"tcp", 00:20:21.727 "traddr": "10.0.0.2", 00:20:21.727 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:21.727 "adrfam": "ipv4", 00:20:21.727 "trsvcid": "8009", 00:20:21.727 "wait_for_attach": true, 00:20:21.727 "method": "bdev_nvme_start_discovery", 00:20:21.727 "req_id": 1 00:20:21.727 } 00:20:21.727 Got JSON-RPC error response 00:20:21.727 response: 00:20:21.727 { 00:20:21.727 "code": -17, 00:20:21.727 "message": "File exists" 00:20:21.727 } 00:20:21.727 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:21.727 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:21.727 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:21.727 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:21.727 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:21.727 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:20:21.727 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:21.727 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.727 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.727 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:21.727 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:21.727 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:21.727 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.985 request: 00:20:21.985 { 00:20:21.985 "name": "nvme_second", 00:20:21.985 "trtype": "tcp", 00:20:21.985 "traddr": "10.0.0.2", 00:20:21.985 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:21.985 "adrfam": "ipv4", 00:20:21.985 "trsvcid": "8009", 00:20:21.985 "wait_for_attach": true, 00:20:21.985 "method": "bdev_nvme_start_discovery", 00:20:21.985 "req_id": 1 00:20:21.985 } 00:20:21.985 Got JSON-RPC error response 00:20:21.985 response: 00:20:21.985 { 00:20:21.985 "code": -17, 00:20:21.985 "message": "File exists" 00:20:21.985 } 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:21.985 00:57:08 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.985 00:57:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:22.917 [2024-05-15 00:57:09.962137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.917 [2024-05-15 00:57:09.962361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.917 [2024-05-15 00:57:09.962389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1518220 with addr=10.0.0.2, port=8010 00:20:22.917 [2024-05-15 00:57:09.962417] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:22.917 [2024-05-15 00:57:09.962436] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:22.917 [2024-05-15 00:57:09.962451] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:20:24.290 [2024-05-15 00:57:10.964569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.290 [2024-05-15 00:57:10.964798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.291 [2024-05-15 00:57:10.964826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1518220 with addr=10.0.0.2, port=8010 00:20:24.291 [2024-05-15 00:57:10.964854] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:24.291 [2024-05-15 00:57:10.964871] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:24.291 [2024-05-15 00:57:10.964886] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:20:25.224 [2024-05-15 00:57:11.966730] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:20:25.224 request: 00:20:25.224 { 00:20:25.224 "name": "nvme_second", 00:20:25.224 "trtype": "tcp", 00:20:25.224 "traddr": "10.0.0.2", 00:20:25.224 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:25.224 "adrfam": "ipv4", 00:20:25.224 "trsvcid": "8010", 00:20:25.224 "attach_timeout_ms": 3000, 00:20:25.224 "method": "bdev_nvme_start_discovery", 00:20:25.224 "req_id": 1 00:20:25.224 } 00:20:25.224 Got JSON-RPC error response 00:20:25.224 response: 
00:20:25.224 { 00:20:25.224 "code": -110, 00:20:25.224 "message": "Connection timed out" 00:20:25.224 } 00:20:25.224 00:57:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:25.224 00:57:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:25.224 00:57:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:25.224 00:57:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:25.224 00:57:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:25.224 00:57:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:20:25.224 00:57:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:25.224 00:57:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:25.224 00:57:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.224 00:57:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.224 00:57:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:25.224 00:57:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:25.224 00:57:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4058306 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:25.224 rmmod nvme_tcp 00:20:25.224 rmmod nvme_fabrics 00:20:25.224 rmmod nvme_keyring 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 4058218 ']' 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 4058218 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 4058218 ']' 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 4058218 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4058218 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4058218' 00:20:25.224 killing process with pid 4058218 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 4058218 00:20:25.224 [2024-05-15 00:57:12.126863] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:25.224 00:57:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 4058218 00:20:25.484 00:57:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:25.484 00:57:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:25.484 00:57:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:25.484 00:57:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:25.484 00:57:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:25.484 00:57:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.484 00:57:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.484 00:57:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.394 00:57:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:27.394 00:20:27.394 real 0m13.796s 00:20:27.394 user 0m21.101s 00:20:27.394 sys 0m2.522s 00:20:27.394 00:57:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:27.394 00:57:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.394 ************************************ 00:20:27.394 END TEST nvmf_host_discovery 00:20:27.394 ************************************ 00:20:27.394 00:57:14 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:27.394 00:57:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:27.394 00:57:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:27.394 00:57:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:27.654 ************************************ 00:20:27.654 START TEST nvmf_host_multipath_status 00:20:27.654 ************************************ 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:27.654 * Looking for test storage... 
00:20:27.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:20:27.654 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:27.655 00:57:14 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:27.655 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:20:27.655 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:27.655 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.655 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:27.655 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:27.655 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:27.655 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.655 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.655 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.655 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:27.655 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:27.655 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:20:27.655 00:57:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.559 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:29.560 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:29.560 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
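The nvmf/common.sh trace running through here sorts every NIC on the box into vendor buckets (e810, x722, mlx) by looking up vendor:device pairs in a pci_bus_cache map; the per-device loop that follows resolves each matching PCI function (two Intel 0x159b "ice" ports in this run, 0000:08:00.0 and 0000:08:00.1) to its kernel net device via sysfs. That resolution step is just a glob, sketched here assuming the function is bound to a network driver:

    # For a PCI function such as 0000:08:00.0, the bound netdev names sit in
    # /sys/bus/pci/devices/<bdf>/net/ (cvl_0_0 in this run).
    pci=0000:08:00.0
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )    # @383 below
    pci_net_devs=( "${pci_net_devs[@]##*/}" )             # @399: strip the path
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # @400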
00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:29.560 Found net devices under 0000:08:00.0: cvl_0_0 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:29.560 Found net devices under 0000:08:00.1: cvl_0_1 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:29.560 00:57:16 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:29.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:20:29.560 00:20:29.560 --- 10.0.0.2 ping statistics --- 00:20:29.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.560 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:29.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:29.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:20:29.560 00:20:29.560 --- 10.0.0.1 ping statistics --- 00:20:29.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.560 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=4061303 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 4061303 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 4061303 ']' 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:29.560 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:29.560 [2024-05-15 00:57:16.369572] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
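The nvmf_tcp_init sequence traced above splits the two ice ports across network namespaces so one machine can play both target and initiator: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits TCP 4420 from the initiator side, and a ping in each direction proves the path before any NVMe traffic flows. Condensed from the @242-@268 entries:

    # Every target-side command in this log is prefixed with
    # "ip netns exec cvl_0_0_ns_spdk" (NVMF_TARGET_NS_CMD in the trace).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target, 0.231 ms
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator, 0.133 ms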
00:20:29.560 [2024-05-15 00:57:16.369665] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.560 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.560 [2024-05-15 00:57:16.433625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:29.560 [2024-05-15 00:57:16.549307] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.560 [2024-05-15 00:57:16.549371] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.560 [2024-05-15 00:57:16.549388] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.560 [2024-05-15 00:57:16.549402] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.560 [2024-05-15 00:57:16.549413] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:29.560 [2024-05-15 00:57:16.549510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.560 [2024-05-15 00:57:16.549517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.818 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:29.818 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:20:29.818 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:29.818 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.818 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:29.818 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.818 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4061303 00:20:29.819 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:30.077 [2024-05-15 00:57:16.948458] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.077 00:57:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:30.335 Malloc0 00:20:30.335 00:57:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:30.593 00:57:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:30.850 00:57:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:31.109 [2024-05-15 00:57:17.988085] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:20:31.109 [2024-05-15 00:57:17.988371] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.109 00:57:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:31.375 [2024-05-15 00:57:18.229068] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:31.376 00:57:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4061524 00:20:31.376 00:57:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:31.376 00:57:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:31.376 00:57:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4061524 /var/tmp/bdevperf.sock 00:20:31.376 00:57:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 4061524 ']' 00:20:31.376 00:57:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.376 00:57:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:31.376 00:57:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:31.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
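Target-side setup for the multipath test is a short RPC sequence against the default /var/tmp/spdk.sock socket, as the @36-@42 host/multipath_status.sh lines show: create the TCP transport, back the namespace with a 64 MiB / 512-byte-block malloc bdev, create a subsystem with ANA reporting enabled, attach the namespace, and expose it on two ports so the host gets two paths to the same namespace. The same sequence with the workspace prefix trimmed (in SPDK's rpc.py, -r enables ANA reporting and -m caps max namespaces):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421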
00:20:31.376 00:57:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:31.376 00:57:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:31.634 00:57:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:31.634 00:57:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:20:31.634 00:57:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:31.892 00:57:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:20:32.459 Nvme0n1 00:20:32.459 00:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:32.717 Nvme0n1 00:20:32.717 00:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:20:32.717 00:57:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:35.246 00:57:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:20:35.246 00:57:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:20:35.246 00:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:35.504 00:57:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:20:36.438 00:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:20:36.438 00:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:36.438 00:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.438 00:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:36.695 00:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:36.695 00:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:36.695 00:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.695 00:57:23 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:36.954 00:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:36.954 00:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:36.954 00:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.954 00:57:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:37.214 00:57:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:37.214 00:57:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:37.214 00:57:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:37.214 00:57:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:37.472 00:57:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:37.472 00:57:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:37.472 00:57:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:37.472 00:57:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:38.072 00:57:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:38.072 00:57:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:38.072 00:57:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:38.072 00:57:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:38.336 00:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:38.336 00:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:20:38.336 00:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:38.599 00:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:38.857 00:57:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:20:39.791 00:57:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:20:39.791 00:57:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:39.791 00:57:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:39.791 00:57:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:40.049 00:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:40.049 00:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:40.049 00:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:40.049 00:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:40.307 00:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:40.307 00:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:40.307 00:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:40.307 00:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:40.565 00:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:40.565 00:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:40.565 00:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:40.565 00:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:41.131 00:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:41.131 00:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:41.131 00:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:41.131 00:57:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:41.389 00:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:41.389 00:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:41.389 00:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:41.389 00:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:41.647 00:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:41.647 00:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:20:41.647 00:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:41.905 00:57:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:20:42.163 00:57:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:20:43.098 00:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:20:43.098 00:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:43.098 00:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:43.098 00:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:43.357 00:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:43.357 00:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:43.357 00:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:43.357 00:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:43.923 00:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:43.923 00:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:43.923 00:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:43.923 00:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:44.182 00:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:44.182 00:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:44.182 00:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:44.182 00:57:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:44.441 00:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:44.441 00:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:44.441 00:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:44.441 00:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:44.699 00:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:44.699 00:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:44.699 00:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:44.699 00:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:44.957 00:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:44.957 00:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:20:44.957 00:57:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:45.216 00:57:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:45.474 00:57:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:20:46.850 00:57:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:20:46.850 00:57:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:46.850 00:57:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:46.850 00:57:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:46.850 00:57:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:46.850 00:57:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:46.850 00:57:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:46.850 00:57:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:47.109 00:57:34 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:47.109 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:47.109 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:47.109 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:47.368 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:47.368 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:47.368 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:47.368 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:47.627 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:47.627 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:47.627 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:47.627 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:47.886 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:47.886 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:47.886 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:47.886 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:48.145 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:48.145 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:20:48.145 00:57:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:48.404 00:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:48.663 00:57:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:49.599 00:57:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:49.599 00:57:36 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:49.599 00:57:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:49.599 00:57:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:49.857 00:57:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:49.857 00:57:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:49.857 00:57:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:49.857 00:57:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:50.116 00:57:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:50.116 00:57:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:50.116 00:57:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:50.116 00:57:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:50.375 00:57:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:50.375 00:57:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:50.375 00:57:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:50.375 00:57:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:50.633 00:57:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:50.633 00:57:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:50.633 00:57:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:50.633 00:57:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:51.201 00:57:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:51.201 00:57:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:51.201 00:57:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:51.201 00:57:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:51.201 00:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:51.201 00:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:20:51.201 00:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:51.460 00:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:51.718 00:57:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:53.090 00:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:53.090 00:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:53.090 00:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:53.090 00:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:53.090 00:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:53.090 00:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:53.090 00:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:53.090 00:57:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:53.346 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:53.346 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:53.346 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:53.346 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:53.603 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:53.603 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:53.603 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:53.603 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:53.861 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:53.861 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:53.861 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:53.861 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:54.120 00:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:54.120 00:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:54.120 00:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:54.120 00:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:54.377 00:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:54.377 00:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:54.942 00:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:54.942 00:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:20:54.942 00:57:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:55.199 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:20:56.569 00:57:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:56.569 00:57:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:56.569 00:57:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:56.569 00:57:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:56.569 00:57:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:56.569 00:57:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:56.569 00:57:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:56.569 00:57:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:20:56.825 00:57:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:56.825 00:57:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:56.825 00:57:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:56.825 00:57:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:57.082 00:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:57.082 00:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:57.082 00:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:57.082 00:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:57.648 00:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:57.648 00:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:57.648 00:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:57.648 00:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:57.907 00:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:57.907 00:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:57.907 00:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:57.907 00:57:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:58.165 00:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:58.165 00:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:58.165 00:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:58.423 00:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:58.682 00:57:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:59.702 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:20:59.702 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:59.702 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:59.702 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:59.963 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:59.963 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:59.963 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:59.963 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:00.221 00:57:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:00.221 00:57:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:00.221 00:57:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:00.221 00:57:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:00.480 00:57:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:00.480 00:57:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:00.480 00:57:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:00.480 00:57:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:00.737 00:57:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:00.738 00:57:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:00.995 00:57:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:00.995 00:57:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:01.254 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:01.254 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:01.254 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:01.254 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:01.512 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:01.512 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:01.512 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:01.770 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:02.029 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:21:02.972 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:21:02.972 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:02.972 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:02.972 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:03.230 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:03.230 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:03.230 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:03.230 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:03.488 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:03.488 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:03.488 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:03.488 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:04.055 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:04.055 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:04.055 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.055 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:04.313 00:57:51 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:04.313 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:04.313 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.313 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:04.571 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:04.571 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:04.571 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.571 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:04.828 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:04.828 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:21:04.828 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:05.088 00:57:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:05.346 00:57:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:21:06.280 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:21:06.280 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:06.280 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:06.280 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:06.846 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:06.846 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:06.846 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:06.846 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:06.846 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:06.846 00:57:53 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:06.846 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:06.846 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:07.104 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:07.104 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:07.104 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.104 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:07.363 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:07.363 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:07.363 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.363 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:07.621 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:07.621 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:07.621 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.621 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:07.880 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:07.880 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4061524 00:21:07.880 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 4061524 ']' 00:21:07.880 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 4061524 00:21:07.880 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:21:07.880 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:07.880 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4061524 00:21:07.880 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:07.880 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:07.880 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
4061524' 00:21:07.880 killing process with pid 4061524 00:21:07.880 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 4061524 00:21:07.880 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 4061524 00:21:08.142 Connection closed with partial response: 00:21:08.142 00:21:08.142 00:21:08.142 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4061524 00:21:08.142 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:08.142 [2024-05-15 00:57:18.290975] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:21:08.142 [2024-05-15 00:57:18.291086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061524 ] 00:21:08.142 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.142 [2024-05-15 00:57:18.344811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.142 [2024-05-15 00:57:18.461776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.142 Running I/O for 90 seconds... 00:21:08.142 [2024-05-15 00:57:35.218261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.142 [2024-05-15 00:57:35.218323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:08.142 [2024-05-15 00:57:35.218884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.142 [2024-05-15 00:57:35.218911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:08.142 [2024-05-15 00:57:35.218949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.142 [2024-05-15 00:57:35.218970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:08.142 [2024-05-15 00:57:35.218996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.142 [2024-05-15 00:57:35.219014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:08.142 [2024-05-15 00:57:35.219039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.142 [2024-05-15 00:57:35.219056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:08.142 [2024-05-15 00:57:35.219080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.142 [2024-05-15 00:57:35.219098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:08.142 [2024-05-15 00:57:35.219123] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.142 [2024-05-15 00:57:35.219140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:08.142 [2024-05-15 00:57:35.219172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.142 [2024-05-15 00:57:35.219190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:08.142 [2024-05-15 00:57:35.219214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.142 [2024-05-15 00:57:35.219231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:08.142 [2024-05-15 00:57:35.219258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.142 [2024-05-15 00:57:35.219276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:08.142 [2024-05-15 00:57:35.219301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.142 [2024-05-15 00:57:35.219330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:08.142 [2024-05-15 00:57:35.219356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.142 [2024-05-15 00:57:35.219374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:08.142 [2024-05-15 00:57:35.219398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.142 [2024-05-15 00:57:35.219416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:08.142 [2024-05-15 00:57:35.219440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.142 [2024-05-15 00:57:35.219457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:08.142 [2024-05-15 00:57:35.219481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.142 [2024-05-15 00:57:35.219498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:08.142 [2024-05-15 00:57:35.219522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.142 [2024-05-15 00:57:35.219540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 
sqhd:0056 p:0 m:0 dnr:0
00:21:08.142 [2024-05-15 00:57:35.219563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:08.142 [2024-05-15 00:57:35.219581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
[identical command/completion pairs elided: at 00:57:35 every remaining queued I/O on qid:1 -- WRITEs lba 100416-101136 and READs lba 100120-100272, len:8 each -- completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02); a second burst at 00:57:52 fails the next wave the same way (WRITEs lba 97232-97736, READs lba 96736-97408), with sqhd advancing monotonically throughout]
00:21:08.145 [2024-05-15 00:57:52.306544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:08.145 [2024-05-15 00:57:52.306563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
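The flood of NOTICE output above is the multipath test behaving as intended: every in-flight command on qid:1 is completed with the NVMe ANA status ASYMMETRIC ACCESS INACCESSIBLE (status code type 03h, status code 02h), which the host multipath layer treats as "retry on another path" rather than a hard I/O error. For reference, a minimal sketch of how a listener is typically flipped into that state on an SPDK target via rpc.py; the subsystem NQN matches the one torn down below, but the transport address and service ID are assumed placeholders, not values taken from this log:

    # Fail the active path: I/O queued against this listener now completes
    # with ASYMMETRIC ACCESS INACCESSIBLE (03/02), as seen in the dump above.
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible

    # Restore the path once the host has failed over.
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized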
00:21:08.145 Received shutdown signal, test time was about 34.954469 seconds
00:21:08.145 
00:21:08.145                                                       Latency(us)
00:21:08.145 Device Information                                  : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min     max
00:21:08.145 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:08.145 	 Verification LBA range: start 0x0 length 0x4000
00:21:08.145 	 Nvme0n1                                            : 34.95       6989.17  27.30  0.00    0.00  18278.82  910.22  4026531.84
00:21:08.145 ===================================================================================================================
00:21:08.145 Total                                               :             6989.17  27.30  0.00    0.00  18278.82  910.22  4026531.84
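A quick cross-check of the summary row: with the fixed 4096-byte I/O size, 6989.17 IOPS x 4096 B / 1048576 B/MiB = 27.30 MiB/s, matching the MiB/s column, and 6989.17 IOPS x 34.95 s of runtime works out to roughly 244,000 I/Os verified before the shutdown signal arrived.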
4026531.84 00:21:08.145 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:08.403 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:21:08.403 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:08.403 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:21:08.403 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:08.403 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:21:08.403 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:08.403 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:21:08.403 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:08.403 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:08.403 rmmod nvme_tcp 00:21:08.403 rmmod nvme_fabrics 00:21:08.403 rmmod nvme_keyring 00:21:08.661 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:08.661 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:21:08.661 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:21:08.661 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 4061303 ']' 00:21:08.661 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 4061303 00:21:08.661 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 4061303 ']' 00:21:08.661 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 4061303 00:21:08.661 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:21:08.661 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:08.661 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4061303 00:21:08.661 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:08.661 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:08.661 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4061303' 00:21:08.661 killing process with pid 4061303 00:21:08.661 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 4061303 00:21:08.661 [2024-05-15 00:57:55.501784] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:08.661 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 4061303 00:21:08.935 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:08.935 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:08.935 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 
-- # nvmf_tcp_fini 00:21:08.935 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:08.935 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:08.935 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.935 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.935 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.840 00:57:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:10.840 00:21:10.840 real 0m43.325s 00:21:10.840 user 2m8.733s 00:21:10.840 sys 0m12.392s 00:21:10.840 00:57:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:10.840 00:57:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:10.840 ************************************ 00:21:10.840 END TEST nvmf_host_multipath_status 00:21:10.840 ************************************ 00:21:10.840 00:57:57 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:10.840 00:57:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:10.840 00:57:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:10.840 00:57:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:10.840 ************************************ 00:21:10.840 START TEST nvmf_discovery_remove_ifc 00:21:10.840 ************************************ 00:21:10.840 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:10.840 * Looking for test storage... 
00:21:10.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:10.840 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.840 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:21:10.840 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.840 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.840 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.840 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.840 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.100 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:21:11.101 00:57:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.480 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:12.481 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:12.481 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.481 00:57:59 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:12.481 Found net devices under 0000:08:00.0: cvl_0_0 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:12.481 Found net devices under 0000:08:00.1: cvl_0_1 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:12.481 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:12.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:21:12.740 00:21:12.740 --- 10.0.0.2 ping statistics --- 00:21:12.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.740 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:12.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:21:12.740 00:21:12.740 --- 10.0.0.1 ping statistics --- 00:21:12.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.740 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=4066605 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 4066605 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 4066605 ']' 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.740 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:12.741 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.741 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:12.741 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:12.741 [2024-05-15 00:57:59.650387] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
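The xtrace above compresses the whole point-to-point fixture into one stream: one port of the e810 pair (cvl_0_0) is moved into a network namespace to act as the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, and both directions are ping-checked before nvmf_tgt is launched inside the namespace. Reconstructed as a plain shell sequence, using the device, namespace, and address names from this run, the setup is roughly:

    ip netns add cvl_0_0_ns_spdk                       # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Only after both pings succeed does the harness wrap nvmf_tgt in "ip netns exec cvl_0_0_ns_spdk", so the connection failures later in this log can be read as deliberate, not as routing problems.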
00:21:12.741 [2024-05-15 00:57:59.650476] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.741 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.741 [2024-05-15 00:57:59.713593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.000 [2024-05-15 00:57:59.829096] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.000 [2024-05-15 00:57:59.829161] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.000 [2024-05-15 00:57:59.829183] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.000 [2024-05-15 00:57:59.829197] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.000 [2024-05-15 00:57:59.829209] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.000 [2024-05-15 00:57:59.829239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.000 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:13.000 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:21:13.000 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:13.000 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.000 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:13.000 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.000 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:13.000 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.000 00:57:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:13.000 [2024-05-15 00:57:59.973358] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.000 [2024-05-15 00:57:59.981309] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:13.000 [2024-05-15 00:57:59.981564] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:13.000 null0 00:21:13.000 [2024-05-15 00:58:00.013514] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.000 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.000 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4066625 00:21:13.000 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:13.000 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4066625 /tmp/host.sock 00:21:13.000 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 4066625 ']' 00:21:13.000 00:58:00 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:21:13.000 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:13.000 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:13.000 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:13.000 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:13.000 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:13.258 [2024-05-15 00:58:00.082675] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:21:13.258 [2024-05-15 00:58:00.082762] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4066625 ] 00:21:13.258 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.258 [2024-05-15 00:58:00.141634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.258 [2024-05-15 00:58:00.260014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.517 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:13.517 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:21:13.517 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:13.517 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:13.517 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.517 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:13.517 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.517 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:13.517 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.517 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:13.517 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.517 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:13.517 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.517 00:58:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:14.451 [2024-05-15 00:58:01.493019] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:14.451 [2024-05-15 00:58:01.493064] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:14.451 [2024-05-15 
00:58:01.493092] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:14.709 [2024-05-15 00:58:01.619516] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:14.709 [2024-05-15 00:58:01.682000] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:14.709 [2024-05-15 00:58:01.682066] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:14.709 [2024-05-15 00:58:01.682109] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:14.709 [2024-05-15 00:58:01.682138] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:14.709 [2024-05-15 00:58:01.682177] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:14.709 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.709 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:14.709 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:14.709 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:14.709 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:14.709 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.709 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:14.709 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:14.709 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:14.709 [2024-05-15 00:58:01.689905] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xe73560 was disconnected and freed. delete nvme_qpair. 
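Stripped of the xtrace noise, the attach path is two host-side RPCs against the app listening on /tmp/host.sock: start discovery at the target's discovery service on port 8009, then poll the bdev list until the expected namespace appears. A rough equivalent of what the harness just did, with the NQN, addresses, and timeouts copied from this run (rpc.py is scripts/rpc.py in the SPDK tree):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

    # wait_for_bdev nvme0n1, reduced to its core polling loop (a sketch):
    while [[ "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != nvme0n1 ]]; do
        sleep 1
    done

The unusually short --ctrlr-loss-timeout-sec 2 is deliberate: it is what lets the next phase observe the controller being deleted within seconds of the interface vanishing.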
00:21:14.709 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.709 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:14.709 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:21:14.709 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:21:14.709 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:14.967 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:14.967 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:14.967 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.967 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:14.967 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:14.967 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:14.967 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:14.967 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.967 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:14.967 00:58:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:15.900 00:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:15.900 00:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:15.900 00:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:15.900 00:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.900 00:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:15.900 00:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:15.900 00:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:15.900 00:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.900 00:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:15.900 00:58:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:16.834 00:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:16.834 00:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:16.834 00:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.834 00:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:16.834 00:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:16.834 00:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:21:16.834 00:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:16.834 00:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.092 00:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:17.092 00:58:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:18.036 00:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:18.036 00:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:18.036 00:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:18.036 00:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.036 00:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:18.036 00:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:18.036 00:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:18.036 00:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.036 00:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:18.036 00:58:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:18.970 00:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:18.970 00:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:18.970 00:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.970 00:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:18.970 00:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:18.970 00:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:18.970 00:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:18.970 00:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.970 00:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:18.970 00:58:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:20.344 00:58:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:20.344 00:58:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:20.344 00:58:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:20.344 00:58:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.344 00:58:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:20.344 00:58:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:20.344 00:58:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:20.344 00:58:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
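The repeating get_bdev_list/sleep-1 iterations running through this stretch are the actual assertion of the test: after the target-side address was deleted and the link taken down, the host must notice on its own and drop nvme0n1 from the bdev list. A sketch of that phase, using the same commands and names logged above:

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # wait_for_bdev '': poll until the bdev list drains to empty
    while [[ -n "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | xargs)" ]]; do
        sleep 1
    done

Nothing tears the connection down gracefully here; the host only finds out through keep-alive and reconnect timeouts, which is why several polling rounds pass before anything changes.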
00:21:20.344 00:58:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:20.344 00:58:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:20.344 [2024-05-15 00:58:07.123091] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:20.344 [2024-05-15 00:58:07.123165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.344 [2024-05-15 00:58:07.123189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.344 [2024-05-15 00:58:07.123209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.344 [2024-05-15 00:58:07.123224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.344 [2024-05-15 00:58:07.123239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.344 [2024-05-15 00:58:07.123253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.344 [2024-05-15 00:58:07.123269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.344 [2024-05-15 00:58:07.123284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.344 [2024-05-15 00:58:07.123299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.344 [2024-05-15 00:58:07.123314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.344 [2024-05-15 00:58:07.123330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3a980 is same with the state(5) to be set 00:21:20.344 [2024-05-15 00:58:07.133104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3a980 (9): Bad file descriptor 00:21:20.344 [2024-05-15 00:58:07.143154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:21.278 00:58:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:21.278 00:58:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:21.278 00:58:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.278 00:58:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:21.278 00:58:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:21.278 00:58:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:21.278 00:58:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:21.278 [2024-05-15 00:58:08.209021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:22.213 [2024-05-15 
00:58:09.232964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:22.213 [2024-05-15 00:58:09.233012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3a980 with addr=10.0.0.2, port=4420 00:21:22.213 [2024-05-15 00:58:09.233040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3a980 is same with the state(5) to be set 00:21:22.213 [2024-05-15 00:58:09.233534] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3a980 (9): Bad file descriptor 00:21:22.213 [2024-05-15 00:58:09.233577] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.213 [2024-05-15 00:58:09.233615] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:21:22.213 [2024-05-15 00:58:09.233655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.213 [2024-05-15 00:58:09.233678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.213 [2024-05-15 00:58:09.233699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.213 [2024-05-15 00:58:09.233713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.213 [2024-05-15 00:58:09.233728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.213 [2024-05-15 00:58:09.233742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.213 [2024-05-15 00:58:09.233757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.213 [2024-05-15 00:58:09.233771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.213 [2024-05-15 00:58:09.233786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.213 [2024-05-15 00:58:09.233800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.213 [2024-05-15 00:58:09.233815] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
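The cascade just logged is the expected failure sequence rather than a bug: spdk_sock_recv() times out (errno 110), the in-flight admin commands are completed as ABORTED - SQ DELETION, each reconnect attempt fails in connect() against the downed port, and once the 2-second controller-loss timeout expires the discovery entry for nqn.2016-06.io.spdk:cnode0 is removed and the controller is left in failed state. When triaging a console log like this one, that timeline can be pulled out with a single grep (the file name is a placeholder for wherever the log was saved):

    grep -E 'errno (= )?110|Resetting controller failed|Remove discovery entry|in failed state' console.log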
00:21:22.213 [2024-05-15 00:58:09.234075] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe39e10 (9): Bad file descriptor 00:21:22.213 [2024-05-15 00:58:09.235097] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:22.213 [2024-05-15 00:58:09.235123] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:21:22.213 00:58:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.213 00:58:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:22.213 00:58:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:23.586 00:58:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:24.519 [2024-05-15 00:58:11.247203] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:24.519 [2024-05-15 00:58:11.247241] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:24.519 [2024-05-15 00:58:11.247267] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:24.519 [2024-05-15 00:58:11.373663] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:21:24.519 00:58:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:24.519 00:58:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:24.519 00:58:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:24.519 00:58:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:24.519 00:58:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:24.519 00:58:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.519 00:58:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:24.519 00:58:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.519 00:58:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:24.519 00:58:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:24.519 [2024-05-15 00:58:11.477579] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:24.520 [2024-05-15 00:58:11.477634] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:24.520 [2024-05-15 00:58:11.477671] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:24.520 [2024-05-15 00:58:11.477696] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:21:24.520 [2024-05-15 00:58:11.477710] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:24.520 [2024-05-15 00:58:11.485887] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xe53470 was disconnected and freed. delete nvme_qpair. 
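Recovery is the mirror image of removal: the address and link come back, the still-polling discovery service finds the subsystem again, and the re-attached controller surfaces as nvme1/nvme1n1 rather than nvme0n1, because the original controller object was torn down for good. Reduced to the commands seen above (same names as before):

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # wait_for_bdev nvme1n1, then confirm the new name:
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'    # expect: nvme1n1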
00:21:25.451 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:25.451 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 4066625 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 4066625 ']' 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 4066625 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4066625 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4066625' 00:21:25.452 killing process with pid 4066625 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 4066625 00:21:25.452 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 4066625 00:21:25.709 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:25.709 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:25.709 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:21:25.709 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:25.709 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:21:25.709 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:25.709 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:25.709 rmmod nvme_tcp 00:21:25.709 rmmod nvme_fabrics 00:21:25.709 rmmod nvme_keyring 00:21:25.709 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:25.969 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:21:25.969 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
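killprocess, whose expansion fills the lines above, is the harness's guarded shutdown: it checks that the PID still answers kill -0 and that its command name is an SPDK reactor before signalling, which keeps a recycled PID from taking down an unrelated process. Its core, sketched with the host-app PID from this run:

    # guarded kill, approximately as common/autotest_common.sh does it
    pid=4066625
    if kill -0 "$pid" 2>/dev/null && [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"      # reap it; the app was started by this same shell
    fi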
00:21:25.969 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 4066605 ']' 00:21:25.969 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 4066605 00:21:25.969 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 4066605 ']' 00:21:25.969 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 4066605 00:21:25.969 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:21:25.969 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:25.969 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4066605 00:21:25.969 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:25.969 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:25.969 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4066605' 00:21:25.969 killing process with pid 4066605 00:21:25.969 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 4066605 00:21:25.969 [2024-05-15 00:58:12.798286] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:25.969 00:58:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 4066605 00:21:25.969 00:58:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:25.969 00:58:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:25.969 00:58:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:25.969 00:58:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.969 00:58:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:25.969 00:58:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.969 00:58:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.969 00:58:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.504 00:58:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:28.504 00:21:28.504 real 0m17.218s 00:21:28.504 user 0m24.516s 00:21:28.504 sys 0m2.591s 00:21:28.504 00:58:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:28.504 00:58:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:28.504 ************************************ 00:21:28.504 END TEST nvmf_discovery_remove_ifc 00:21:28.504 ************************************ 00:21:28.504 00:58:15 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:28.504 00:58:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:28.504 00:58:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:28.504 00:58:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:21:28.504 ************************************ 00:21:28.504 START TEST nvmf_identify_kernel_target 00:21:28.504 ************************************ 00:21:28.504 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:28.504 * Looking for test storage... 00:21:28.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:28.504 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.504 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:21:28.504 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.504 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.504 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.504 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.504 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.504 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.504 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.504 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.504 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.504 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.504 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:28.504 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:28.504 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:21:28.505 00:58:15 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:21:28.505 00:58:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:29.882 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.882 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:29.882 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:29.883 Found net devices under 0000:08:00.0: cvl_0_0 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:29.883 Found net devices under 0000:08:00.1: cvl_0_1 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:29.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:29.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:21:29.883 00:21:29.883 --- 10.0.0.2 ping statistics --- 00:21:29.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.883 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:29.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:29.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:21:29.883 00:21:29.883 --- 10.0.0.1 ping statistics --- 00:21:29.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.883 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@728 -- # local ip 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:29.883 00:58:16 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:29.883 00:58:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:21:30.825 Waiting for block devices as requested 00:21:30.825 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:21:31.083 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:21:31.083 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:21:31.083 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:21:31.344 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:21:31.344 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:21:31.344 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:21:31.344 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:21:31.602 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:21:31.602 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:21:31.602 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:21:31.602 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:21:31.863 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:21:31.863 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:21:31.863 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:21:31.863 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:21:32.122 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:21:32.122 No valid GPT data, bailing 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:32.122 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:21:32.122 00:21:32.123 Discovery Log Number of Records 2, Generation counter 2 00:21:32.123 =====Discovery Log Entry 0====== 00:21:32.123 trtype: tcp 00:21:32.123 adrfam: ipv4 00:21:32.123 subtype: current discovery subsystem 00:21:32.123 treq: not specified, sq flow control disable supported 00:21:32.123 portid: 1 00:21:32.123 trsvcid: 4420 00:21:32.123 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:32.123 traddr: 10.0.0.1 00:21:32.123 eflags: none 00:21:32.123 sectype: none 00:21:32.123 =====Discovery Log Entry 1====== 00:21:32.123 trtype: tcp 00:21:32.123 adrfam: ipv4 00:21:32.123 subtype: nvme subsystem 00:21:32.123 treq: not specified, sq flow control disable supported 00:21:32.123 portid: 1 00:21:32.123 trsvcid: 4420 00:21:32.123 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:32.123 traddr: 10.0.0.1 00:21:32.123 eflags: none 00:21:32.123 sectype: none 00:21:32.123 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:21:32.123 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:21:32.123 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.383 ===================================================== 00:21:32.383 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:32.383 ===================================================== 00:21:32.383 Controller Capabilities/Features 00:21:32.383 ================================ 00:21:32.383 Vendor ID: 0000 00:21:32.383 Subsystem Vendor ID: 0000 00:21:32.383 Serial Number: 44779355272a737464f6 00:21:32.383 Model Number: Linux 00:21:32.383 Firmware Version: 6.7.0-68 00:21:32.383 Recommended Arb Burst: 0 00:21:32.383 IEEE OUI Identifier: 00 00 00 00:21:32.383 Multi-path I/O 00:21:32.383 May have multiple subsystem ports: No 00:21:32.383 May have multiple 
controllers: No 00:21:32.383 Associated with SR-IOV VF: No 00:21:32.383 Max Data Transfer Size: Unlimited 00:21:32.383 Max Number of Namespaces: 0 00:21:32.383 Max Number of I/O Queues: 1024 00:21:32.383 NVMe Specification Version (VS): 1.3 00:21:32.383 NVMe Specification Version (Identify): 1.3 00:21:32.383 Maximum Queue Entries: 1024 00:21:32.383 Contiguous Queues Required: No 00:21:32.383 Arbitration Mechanisms Supported 00:21:32.383 Weighted Round Robin: Not Supported 00:21:32.383 Vendor Specific: Not Supported 00:21:32.383 Reset Timeout: 7500 ms 00:21:32.383 Doorbell Stride: 4 bytes 00:21:32.383 NVM Subsystem Reset: Not Supported 00:21:32.383 Command Sets Supported 00:21:32.383 NVM Command Set: Supported 00:21:32.383 Boot Partition: Not Supported 00:21:32.383 Memory Page Size Minimum: 4096 bytes 00:21:32.383 Memory Page Size Maximum: 4096 bytes 00:21:32.383 Persistent Memory Region: Not Supported 00:21:32.383 Optional Asynchronous Events Supported 00:21:32.383 Namespace Attribute Notices: Not Supported 00:21:32.383 Firmware Activation Notices: Not Supported 00:21:32.383 ANA Change Notices: Not Supported 00:21:32.383 PLE Aggregate Log Change Notices: Not Supported 00:21:32.383 LBA Status Info Alert Notices: Not Supported 00:21:32.383 EGE Aggregate Log Change Notices: Not Supported 00:21:32.383 Normal NVM Subsystem Shutdown event: Not Supported 00:21:32.383 Zone Descriptor Change Notices: Not Supported 00:21:32.383 Discovery Log Change Notices: Supported 00:21:32.383 Controller Attributes 00:21:32.383 128-bit Host Identifier: Not Supported 00:21:32.383 Non-Operational Permissive Mode: Not Supported 00:21:32.383 NVM Sets: Not Supported 00:21:32.383 Read Recovery Levels: Not Supported 00:21:32.383 Endurance Groups: Not Supported 00:21:32.383 Predictable Latency Mode: Not Supported 00:21:32.383 Traffic Based Keep ALive: Not Supported 00:21:32.383 Namespace Granularity: Not Supported 00:21:32.383 SQ Associations: Not Supported 00:21:32.383 UUID List: Not Supported 00:21:32.383 Multi-Domain Subsystem: Not Supported 00:21:32.383 Fixed Capacity Management: Not Supported 00:21:32.383 Variable Capacity Management: Not Supported 00:21:32.383 Delete Endurance Group: Not Supported 00:21:32.383 Delete NVM Set: Not Supported 00:21:32.383 Extended LBA Formats Supported: Not Supported 00:21:32.383 Flexible Data Placement Supported: Not Supported 00:21:32.383 00:21:32.383 Controller Memory Buffer Support 00:21:32.383 ================================ 00:21:32.383 Supported: No 00:21:32.383 00:21:32.383 Persistent Memory Region Support 00:21:32.383 ================================ 00:21:32.383 Supported: No 00:21:32.383 00:21:32.383 Admin Command Set Attributes 00:21:32.383 ============================ 00:21:32.383 Security Send/Receive: Not Supported 00:21:32.383 Format NVM: Not Supported 00:21:32.383 Firmware Activate/Download: Not Supported 00:21:32.383 Namespace Management: Not Supported 00:21:32.383 Device Self-Test: Not Supported 00:21:32.383 Directives: Not Supported 00:21:32.383 NVMe-MI: Not Supported 00:21:32.383 Virtualization Management: Not Supported 00:21:32.383 Doorbell Buffer Config: Not Supported 00:21:32.383 Get LBA Status Capability: Not Supported 00:21:32.383 Command & Feature Lockdown Capability: Not Supported 00:21:32.383 Abort Command Limit: 1 00:21:32.383 Async Event Request Limit: 1 00:21:32.383 Number of Firmware Slots: N/A 00:21:32.383 Firmware Slot 1 Read-Only: N/A 00:21:32.383 Firmware Activation Without Reset: N/A 00:21:32.383 Multiple Update Detection Support: N/A 
00:21:32.383 Firmware Update Granularity: No Information Provided 00:21:32.383 Per-Namespace SMART Log: No 00:21:32.383 Asymmetric Namespace Access Log Page: Not Supported 00:21:32.383 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:32.383 Command Effects Log Page: Not Supported 00:21:32.383 Get Log Page Extended Data: Supported 00:21:32.383 Telemetry Log Pages: Not Supported 00:21:32.383 Persistent Event Log Pages: Not Supported 00:21:32.383 Supported Log Pages Log Page: May Support 00:21:32.383 Commands Supported & Effects Log Page: Not Supported 00:21:32.383 Feature Identifiers & Effects Log Page:May Support 00:21:32.383 NVMe-MI Commands & Effects Log Page: May Support 00:21:32.383 Data Area 4 for Telemetry Log: Not Supported 00:21:32.383 Error Log Page Entries Supported: 1 00:21:32.383 Keep Alive: Not Supported 00:21:32.383 00:21:32.383 NVM Command Set Attributes 00:21:32.383 ========================== 00:21:32.383 Submission Queue Entry Size 00:21:32.383 Max: 1 00:21:32.383 Min: 1 00:21:32.383 Completion Queue Entry Size 00:21:32.383 Max: 1 00:21:32.383 Min: 1 00:21:32.383 Number of Namespaces: 0 00:21:32.383 Compare Command: Not Supported 00:21:32.383 Write Uncorrectable Command: Not Supported 00:21:32.383 Dataset Management Command: Not Supported 00:21:32.383 Write Zeroes Command: Not Supported 00:21:32.383 Set Features Save Field: Not Supported 00:21:32.383 Reservations: Not Supported 00:21:32.383 Timestamp: Not Supported 00:21:32.383 Copy: Not Supported 00:21:32.383 Volatile Write Cache: Not Present 00:21:32.383 Atomic Write Unit (Normal): 1 00:21:32.383 Atomic Write Unit (PFail): 1 00:21:32.383 Atomic Compare & Write Unit: 1 00:21:32.383 Fused Compare & Write: Not Supported 00:21:32.383 Scatter-Gather List 00:21:32.383 SGL Command Set: Supported 00:21:32.383 SGL Keyed: Not Supported 00:21:32.383 SGL Bit Bucket Descriptor: Not Supported 00:21:32.383 SGL Metadata Pointer: Not Supported 00:21:32.383 Oversized SGL: Not Supported 00:21:32.383 SGL Metadata Address: Not Supported 00:21:32.383 SGL Offset: Supported 00:21:32.383 Transport SGL Data Block: Not Supported 00:21:32.383 Replay Protected Memory Block: Not Supported 00:21:32.383 00:21:32.383 Firmware Slot Information 00:21:32.383 ========================= 00:21:32.383 Active slot: 0 00:21:32.383 00:21:32.383 00:21:32.383 Error Log 00:21:32.383 ========= 00:21:32.383 00:21:32.383 Active Namespaces 00:21:32.383 ================= 00:21:32.383 Discovery Log Page 00:21:32.383 ================== 00:21:32.383 Generation Counter: 2 00:21:32.383 Number of Records: 2 00:21:32.383 Record Format: 0 00:21:32.383 00:21:32.383 Discovery Log Entry 0 00:21:32.383 ---------------------- 00:21:32.383 Transport Type: 3 (TCP) 00:21:32.383 Address Family: 1 (IPv4) 00:21:32.383 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:32.384 Entry Flags: 00:21:32.384 Duplicate Returned Information: 0 00:21:32.384 Explicit Persistent Connection Support for Discovery: 0 00:21:32.384 Transport Requirements: 00:21:32.384 Secure Channel: Not Specified 00:21:32.384 Port ID: 1 (0x0001) 00:21:32.384 Controller ID: 65535 (0xffff) 00:21:32.384 Admin Max SQ Size: 32 00:21:32.384 Transport Service Identifier: 4420 00:21:32.384 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:32.384 Transport Address: 10.0.0.1 00:21:32.384 Discovery Log Entry 1 00:21:32.384 ---------------------- 00:21:32.384 Transport Type: 3 (TCP) 00:21:32.384 Address Family: 1 (IPv4) 00:21:32.384 Subsystem Type: 2 (NVM Subsystem) 00:21:32.384 Entry Flags: 
00:21:32.384 Duplicate Returned Information: 0 00:21:32.384 Explicit Persistent Connection Support for Discovery: 0 00:21:32.384 Transport Requirements: 00:21:32.384 Secure Channel: Not Specified 00:21:32.384 Port ID: 1 (0x0001) 00:21:32.384 Controller ID: 65535 (0xffff) 00:21:32.384 Admin Max SQ Size: 32 00:21:32.384 Transport Service Identifier: 4420 00:21:32.384 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:21:32.384 Transport Address: 10.0.0.1 00:21:32.384 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:32.384 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.384 get_feature(0x01) failed 00:21:32.384 get_feature(0x02) failed 00:21:32.384 get_feature(0x04) failed 00:21:32.384 ===================================================== 00:21:32.384 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:32.384 ===================================================== 00:21:32.384 Controller Capabilities/Features 00:21:32.384 ================================ 00:21:32.384 Vendor ID: 0000 00:21:32.384 Subsystem Vendor ID: 0000 00:21:32.384 Serial Number: 87c1e4ea68d518e52c58 00:21:32.384 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:21:32.384 Firmware Version: 6.7.0-68 00:21:32.384 Recommended Arb Burst: 6 00:21:32.384 IEEE OUI Identifier: 00 00 00 00:21:32.384 Multi-path I/O 00:21:32.384 May have multiple subsystem ports: Yes 00:21:32.384 May have multiple controllers: Yes 00:21:32.384 Associated with SR-IOV VF: No 00:21:32.384 Max Data Transfer Size: Unlimited 00:21:32.384 Max Number of Namespaces: 1024 00:21:32.384 Max Number of I/O Queues: 128 00:21:32.384 NVMe Specification Version (VS): 1.3 00:21:32.384 NVMe Specification Version (Identify): 1.3 00:21:32.384 Maximum Queue Entries: 1024 00:21:32.384 Contiguous Queues Required: No 00:21:32.384 Arbitration Mechanisms Supported 00:21:32.384 Weighted Round Robin: Not Supported 00:21:32.384 Vendor Specific: Not Supported 00:21:32.384 Reset Timeout: 7500 ms 00:21:32.384 Doorbell Stride: 4 bytes 00:21:32.384 NVM Subsystem Reset: Not Supported 00:21:32.384 Command Sets Supported 00:21:32.384 NVM Command Set: Supported 00:21:32.384 Boot Partition: Not Supported 00:21:32.384 Memory Page Size Minimum: 4096 bytes 00:21:32.384 Memory Page Size Maximum: 4096 bytes 00:21:32.384 Persistent Memory Region: Not Supported 00:21:32.384 Optional Asynchronous Events Supported 00:21:32.384 Namespace Attribute Notices: Supported 00:21:32.384 Firmware Activation Notices: Not Supported 00:21:32.384 ANA Change Notices: Supported 00:21:32.384 PLE Aggregate Log Change Notices: Not Supported 00:21:32.384 LBA Status Info Alert Notices: Not Supported 00:21:32.384 EGE Aggregate Log Change Notices: Not Supported 00:21:32.384 Normal NVM Subsystem Shutdown event: Not Supported 00:21:32.384 Zone Descriptor Change Notices: Not Supported 00:21:32.384 Discovery Log Change Notices: Not Supported 00:21:32.384 Controller Attributes 00:21:32.384 128-bit Host Identifier: Supported 00:21:32.384 Non-Operational Permissive Mode: Not Supported 00:21:32.384 NVM Sets: Not Supported 00:21:32.384 Read Recovery Levels: Not Supported 00:21:32.384 Endurance Groups: Not Supported 00:21:32.384 Predictable Latency Mode: Not Supported 00:21:32.384 Traffic Based Keep ALive: Supported 00:21:32.384 Namespace Granularity: Not Supported 
00:21:32.384 SQ Associations: Not Supported 00:21:32.384 UUID List: Not Supported 00:21:32.384 Multi-Domain Subsystem: Not Supported 00:21:32.384 Fixed Capacity Management: Not Supported 00:21:32.384 Variable Capacity Management: Not Supported 00:21:32.384 Delete Endurance Group: Not Supported 00:21:32.384 Delete NVM Set: Not Supported 00:21:32.384 Extended LBA Formats Supported: Not Supported 00:21:32.384 Flexible Data Placement Supported: Not Supported 00:21:32.384 00:21:32.384 Controller Memory Buffer Support 00:21:32.384 ================================ 00:21:32.384 Supported: No 00:21:32.384 00:21:32.384 Persistent Memory Region Support 00:21:32.384 ================================ 00:21:32.384 Supported: No 00:21:32.384 00:21:32.384 Admin Command Set Attributes 00:21:32.384 ============================ 00:21:32.384 Security Send/Receive: Not Supported 00:21:32.384 Format NVM: Not Supported 00:21:32.384 Firmware Activate/Download: Not Supported 00:21:32.384 Namespace Management: Not Supported 00:21:32.384 Device Self-Test: Not Supported 00:21:32.384 Directives: Not Supported 00:21:32.384 NVMe-MI: Not Supported 00:21:32.384 Virtualization Management: Not Supported 00:21:32.384 Doorbell Buffer Config: Not Supported 00:21:32.384 Get LBA Status Capability: Not Supported 00:21:32.384 Command & Feature Lockdown Capability: Not Supported 00:21:32.384 Abort Command Limit: 4 00:21:32.384 Async Event Request Limit: 4 00:21:32.384 Number of Firmware Slots: N/A 00:21:32.384 Firmware Slot 1 Read-Only: N/A 00:21:32.384 Firmware Activation Without Reset: N/A 00:21:32.384 Multiple Update Detection Support: N/A 00:21:32.384 Firmware Update Granularity: No Information Provided 00:21:32.384 Per-Namespace SMART Log: Yes 00:21:32.384 Asymmetric Namespace Access Log Page: Supported 00:21:32.384 ANA Transition Time : 10 sec 00:21:32.384 00:21:32.384 Asymmetric Namespace Access Capabilities 00:21:32.384 ANA Optimized State : Supported 00:21:32.384 ANA Non-Optimized State : Supported 00:21:32.384 ANA Inaccessible State : Supported 00:21:32.384 ANA Persistent Loss State : Supported 00:21:32.384 ANA Change State : Supported 00:21:32.384 ANAGRPID is not changed : No 00:21:32.384 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:21:32.384 00:21:32.384 ANA Group Identifier Maximum : 128 00:21:32.384 Number of ANA Group Identifiers : 128 00:21:32.384 Max Number of Allowed Namespaces : 1024 00:21:32.384 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:21:32.384 Command Effects Log Page: Supported 00:21:32.384 Get Log Page Extended Data: Supported 00:21:32.384 Telemetry Log Pages: Not Supported 00:21:32.384 Persistent Event Log Pages: Not Supported 00:21:32.384 Supported Log Pages Log Page: May Support 00:21:32.384 Commands Supported & Effects Log Page: Not Supported 00:21:32.384 Feature Identifiers & Effects Log Page:May Support 00:21:32.384 NVMe-MI Commands & Effects Log Page: May Support 00:21:32.384 Data Area 4 for Telemetry Log: Not Supported 00:21:32.384 Error Log Page Entries Supported: 128 00:21:32.384 Keep Alive: Supported 00:21:32.384 Keep Alive Granularity: 1000 ms 00:21:32.384 00:21:32.384 NVM Command Set Attributes 00:21:32.384 ========================== 00:21:32.384 Submission Queue Entry Size 00:21:32.384 Max: 64 00:21:32.384 Min: 64 00:21:32.384 Completion Queue Entry Size 00:21:32.384 Max: 16 00:21:32.384 Min: 16 00:21:32.384 Number of Namespaces: 1024 00:21:32.384 Compare Command: Not Supported 00:21:32.384 Write Uncorrectable Command: Not Supported 00:21:32.384 Dataset Management Command: Supported 
00:21:32.384 Write Zeroes Command: Supported 00:21:32.384 Set Features Save Field: Not Supported 00:21:32.384 Reservations: Not Supported 00:21:32.384 Timestamp: Not Supported 00:21:32.384 Copy: Not Supported 00:21:32.384 Volatile Write Cache: Present 00:21:32.384 Atomic Write Unit (Normal): 1 00:21:32.384 Atomic Write Unit (PFail): 1 00:21:32.384 Atomic Compare & Write Unit: 1 00:21:32.384 Fused Compare & Write: Not Supported 00:21:32.384 Scatter-Gather List 00:21:32.384 SGL Command Set: Supported 00:21:32.384 SGL Keyed: Not Supported 00:21:32.384 SGL Bit Bucket Descriptor: Not Supported 00:21:32.384 SGL Metadata Pointer: Not Supported 00:21:32.384 Oversized SGL: Not Supported 00:21:32.384 SGL Metadata Address: Not Supported 00:21:32.384 SGL Offset: Supported 00:21:32.384 Transport SGL Data Block: Not Supported 00:21:32.384 Replay Protected Memory Block: Not Supported 00:21:32.384 00:21:32.384 Firmware Slot Information 00:21:32.384 ========================= 00:21:32.385 Active slot: 0 00:21:32.385 00:21:32.385 Asymmetric Namespace Access 00:21:32.385 =========================== 00:21:32.385 Change Count : 0 00:21:32.385 Number of ANA Group Descriptors : 1 00:21:32.385 ANA Group Descriptor : 0 00:21:32.385 ANA Group ID : 1 00:21:32.385 Number of NSID Values : 1 00:21:32.385 Change Count : 0 00:21:32.385 ANA State : 1 00:21:32.385 Namespace Identifier : 1 00:21:32.385 00:21:32.385 Commands Supported and Effects 00:21:32.385 ============================== 00:21:32.385 Admin Commands 00:21:32.385 -------------- 00:21:32.385 Get Log Page (02h): Supported 00:21:32.385 Identify (06h): Supported 00:21:32.385 Abort (08h): Supported 00:21:32.385 Set Features (09h): Supported 00:21:32.385 Get Features (0Ah): Supported 00:21:32.385 Asynchronous Event Request (0Ch): Supported 00:21:32.385 Keep Alive (18h): Supported 00:21:32.385 I/O Commands 00:21:32.385 ------------ 00:21:32.385 Flush (00h): Supported 00:21:32.385 Write (01h): Supported LBA-Change 00:21:32.385 Read (02h): Supported 00:21:32.385 Write Zeroes (08h): Supported LBA-Change 00:21:32.385 Dataset Management (09h): Supported 00:21:32.385 00:21:32.385 Error Log 00:21:32.385 ========= 00:21:32.385 Entry: 0 00:21:32.385 Error Count: 0x3 00:21:32.385 Submission Queue Id: 0x0 00:21:32.385 Command Id: 0x5 00:21:32.385 Phase Bit: 0 00:21:32.385 Status Code: 0x2 00:21:32.385 Status Code Type: 0x0 00:21:32.385 Do Not Retry: 1 00:21:32.385 Error Location: 0x28 00:21:32.385 LBA: 0x0 00:21:32.385 Namespace: 0x0 00:21:32.385 Vendor Log Page: 0x0 00:21:32.385 ----------- 00:21:32.385 Entry: 1 00:21:32.385 Error Count: 0x2 00:21:32.385 Submission Queue Id: 0x0 00:21:32.385 Command Id: 0x5 00:21:32.385 Phase Bit: 0 00:21:32.385 Status Code: 0x2 00:21:32.385 Status Code Type: 0x0 00:21:32.385 Do Not Retry: 1 00:21:32.385 Error Location: 0x28 00:21:32.385 LBA: 0x0 00:21:32.385 Namespace: 0x0 00:21:32.385 Vendor Log Page: 0x0 00:21:32.385 ----------- 00:21:32.385 Entry: 2 00:21:32.385 Error Count: 0x1 00:21:32.385 Submission Queue Id: 0x0 00:21:32.385 Command Id: 0x4 00:21:32.385 Phase Bit: 0 00:21:32.385 Status Code: 0x2 00:21:32.385 Status Code Type: 0x0 00:21:32.385 Do Not Retry: 1 00:21:32.385 Error Location: 0x28 00:21:32.385 LBA: 0x0 00:21:32.385 Namespace: 0x0 00:21:32.385 Vendor Log Page: 0x0 00:21:32.385 00:21:32.385 Number of Queues 00:21:32.385 ================ 00:21:32.385 Number of I/O Submission Queues: 128 00:21:32.385 Number of I/O Completion Queues: 128 00:21:32.385 00:21:32.385 ZNS Specific Controller Data 00:21:32.385 
============================ 00:21:32.385 Zone Append Size Limit: 0 00:21:32.385 00:21:32.385 00:21:32.385 Active Namespaces 00:21:32.385 ================= 00:21:32.385 get_feature(0x05) failed 00:21:32.385 Namespace ID:1 00:21:32.385 Command Set Identifier: NVM (00h) 00:21:32.385 Deallocate: Supported 00:21:32.385 Deallocated/Unwritten Error: Not Supported 00:21:32.385 Deallocated Read Value: Unknown 00:21:32.385 Deallocate in Write Zeroes: Not Supported 00:21:32.385 Deallocated Guard Field: 0xFFFF 00:21:32.385 Flush: Supported 00:21:32.385 Reservation: Not Supported 00:21:32.385 Namespace Sharing Capabilities: Multiple Controllers 00:21:32.385 Size (in LBAs): 1953525168 (931GiB) 00:21:32.385 Capacity (in LBAs): 1953525168 (931GiB) 00:21:32.385 Utilization (in LBAs): 1953525168 (931GiB) 00:21:32.385 UUID: ed043788-a7c1-454e-9384-43d6421ae253 00:21:32.385 Thin Provisioning: Not Supported 00:21:32.385 Per-NS Atomic Units: Yes 00:21:32.385 Atomic Boundary Size (Normal): 0 00:21:32.385 Atomic Boundary Size (PFail): 0 00:21:32.385 Atomic Boundary Offset: 0 00:21:32.385 NGUID/EUI64 Never Reused: No 00:21:32.385 ANA group ID: 1 00:21:32.385 Namespace Write Protected: No 00:21:32.385 Number of LBA Formats: 1 00:21:32.385 Current LBA Format: LBA Format #00 00:21:32.385 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:32.385 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:32.385 rmmod nvme_tcp 00:21:32.385 rmmod nvme_fabrics 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:32.385 00:58:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.345 00:58:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:34.345 
00:58:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:21:34.345 00:58:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:34.345 00:58:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:21:34.345 00:58:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:34.345 00:58:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:34.345 00:58:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:34.345 00:58:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:34.345 00:58:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:34.345 00:58:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:34.619 00:58:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:21:35.560 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:21:35.560 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:21:35.560 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:21:35.560 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:21:35.560 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:21:35.560 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:21:35.560 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:21:35.560 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:21:35.560 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:21:35.560 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:21:35.560 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:21:35.560 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:21:35.560 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:21:35.560 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:21:35.560 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:21:35.560 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:21:36.495 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:21:36.495 00:21:36.495 real 0m8.397s 00:21:36.495 user 0m1.729s 00:21:36.495 sys 0m2.816s 00:21:36.495 00:58:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:36.495 00:58:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.495 ************************************ 00:21:36.495 END TEST nvmf_identify_kernel_target 00:21:36.495 ************************************ 00:21:36.495 00:58:23 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:36.495 00:58:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:36.495 00:58:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:36.495 00:58:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:36.753 ************************************ 00:21:36.753 START TEST nvmf_auth 00:21:36.753 ************************************ 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:36.753 * 
Looking for test storage... 00:21:36.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # uname -s 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- paths/export.sh@5 -- # export PATH 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@47 -- # : 0 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # keys=() 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # ckeys=() 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- host/auth.sh@81 -- # nvmftestinit 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@285 -- # xtrace_disable 00:21:36.753 00:58:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # pci_devs=() 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # net_devs=() 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # e810=() 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # local -ga e810 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # x722=() 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # local -ga x722 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # mlx=() 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # local -ga mlx 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:38.130 00:58:25 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:38.130 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:38.130 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:38.130 Found net devices under 0000:08:00.0: cvl_0_0 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:38.130 Found net devices under 0000:08:00.1: cvl_0_1 
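Each PCI function that survives the e810/x722/mlx filters above is resolved to its kernel net interface through sysfs: pci_net_devs expands /sys/bus/pci/devices/$pci/net/* and the ##*/ substitution keeps only the interface name. The same lookup as a minimal standalone loop, using an address from the trace:

pci=0000:08:00.0
for path in "/sys/bus/pci/devices/$pci/net/"*; do
    # skip the unexpanded glob when the function has no net children
    [[ -e $path ]] && echo "Found net devices under $pci: ${path##*/}"
done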
00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # is_hw=yes 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:38.130 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:38.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:38.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:21:38.389 00:21:38.389 --- 10.0.0.2 ping statistics --- 00:21:38.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.389 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:38.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:38.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:21:38.389 00:21:38.389 --- 10.0.0.1 ping statistics --- 00:21:38.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.389 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@422 -- # return 0 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # nvmfappstart -L nvme_auth 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@481 -- # nvmfpid=4072046 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@482 -- # waitforlisten 4072046 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 4072046 ']' 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
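The two pings above are the final check of nvmf_tcp_init, which splits the dual-port NIC into a two-node topology on one machine: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target port (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Replayed as a plain command sequence, with every command taken from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator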
00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:38.389 00:58:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@83 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key null 32 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=495cae5cd3b2e5e9f4fa59ff9c34527a 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.at8 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 495cae5cd3b2e5e9f4fa59ff9c34527a 0 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 495cae5cd3b2e5e9f4fa59ff9c34527a 0 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=495cae5cd3b2e5e9f4fa59ff9c34527a 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:21:38.647 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.at8 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.at8 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # keys[0]=/tmp/spdk.key-null.at8 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key sha512 64 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # 
key=9a00977e39e5b3ef684f4ba397cb411fcd8ed44e24b88b9dc22ccafe12c76de5 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.tEQ 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 9a00977e39e5b3ef684f4ba397cb411fcd8ed44e24b88b9dc22ccafe12c76de5 3 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 9a00977e39e5b3ef684f4ba397cb411fcd8ed44e24b88b9dc22ccafe12c76de5 3 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=9a00977e39e5b3ef684f4ba397cb411fcd8ed44e24b88b9dc22ccafe12c76de5 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.tEQ 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.tEQ 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # ckeys[0]=/tmp/spdk.key-sha512.tEQ 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key null 48 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=3f75b4c19cdeb61b52360ed45aa660d24df08065cddd320f 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.u3o 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 3f75b4c19cdeb61b52360ed45aa660d24df08065cddd320f 0 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 3f75b4c19cdeb61b52360ed45aa660d24df08065cddd320f 0 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=3f75b4c19cdeb61b52360ed45aa660d24df08065cddd320f 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.u3o 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.u3o 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # keys[1]=/tmp/spdk.key-null.u3o 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key sha384 48 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' 
['sha384']='2' ['sha512']='3') 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=7f3025acd671810f64dfaf07efb57deb31cf7b3db8d9643e 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.IXi 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 7f3025acd671810f64dfaf07efb57deb31cf7b3db8d9643e 2 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 7f3025acd671810f64dfaf07efb57deb31cf7b3db8d9643e 2 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=7f3025acd671810f64dfaf07efb57deb31cf7b3db8d9643e 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.IXi 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.IXi 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # ckeys[1]=/tmp/spdk.key-sha384.IXi 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=350773e60a723d060dfb03b42b6192d9 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.J9y 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 350773e60a723d060dfb03b42b6192d9 1 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 350773e60a723d060dfb03b42b6192d9 1 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=350773e60a723d060dfb03b42b6192d9 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.J9y 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.J9y 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # 
keys[2]=/tmp/spdk.key-sha256.J9y 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=90be6bbd524c4b0b715b2e1d37484861 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.HzU 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 90be6bbd524c4b0b715b2e1d37484861 1 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 90be6bbd524c4b0b715b2e1d37484861 1 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=90be6bbd524c4b0b715b2e1d37484861 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:21:38.906 00:58:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:21:39.163 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.HzU 00:21:39.163 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.HzU 00:21:39.163 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # ckeys[2]=/tmp/spdk.key-sha256.HzU 00:21:39.163 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key sha384 48 00:21:39.163 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:21:39.163 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:39.163 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:21:39.163 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:21:39.163 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:21:39.163 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:39.163 00:58:25 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=327589ee2d6254f207d0c22eeaebacc2b0f36160ab04b672 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.V8p 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 327589ee2d6254f207d0c22eeaebacc2b0f36160ab04b672 2 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 327589ee2d6254f207d0c22eeaebacc2b0f36160ab04b672 2 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=327589ee2d6254f207d0c22eeaebacc2b0f36160ab04b672 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@705 -- # python - 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.V8p 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.V8p 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # keys[3]=/tmp/spdk.key-sha384.V8p 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key null 32 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=87d76dd5f1452b18226a5f19fb9565b9 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.urY 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 87d76dd5f1452b18226a5f19fb9565b9 0 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 87d76dd5f1452b18226a5f19fb9565b9 0 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=87d76dd5f1452b18226a5f19fb9565b9 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.urY 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.urY 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # ckeys[3]=/tmp/spdk.key-null.urY 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # gen_key sha512 64 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=d80b07cb67a623b52628fe7d5559328505d1a85cb1a8aac542a06ab5af9fa482 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.ob5 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key d80b07cb67a623b52628fe7d5559328505d1a85cb1a8aac542a06ab5af9fa482 3 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 d80b07cb67a623b52628fe7d5559328505d1a85cb1a8aac542a06ab5af9fa482 3 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix 
key digest 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=d80b07cb67a623b52628fe7d5559328505d1a85cb1a8aac542a06ab5af9fa482 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.ob5 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.ob5 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # keys[4]=/tmp/spdk.key-sha512.ob5 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # ckeys[4]= 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@92 -- # waitforlisten 4072046 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 4072046 ']' 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:39.163 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:39.422 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:39.422 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:21:39.422 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:21:39.422 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.at8 00:21:39.422 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.422 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:39.422 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.422 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha512.tEQ ]] 00:21:39.422 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tEQ 00:21:39.422 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.422 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:39.422 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.422 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:21:39.422 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.u3o 00:21:39.422 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.422 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha384.IXi ]] 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IXi 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.J9y 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha256.HzU ]] 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HzU 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.V8p 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-null.urY ]] 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.urY 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:21:39.681 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ob5 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n '' ]] 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@98 -- # nvmet_auth_init 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # get_main_ns_ip 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@639 -- # local block nvme 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@642 -- # modprobe nvmet 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:39.682 00:58:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:21:40.613 Waiting for block devices as requested 00:21:40.613 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:21:40.613 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:21:40.613 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:21:40.873 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:21:40.873 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:21:40.873 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:21:41.131 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:21:41.131 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:21:41.131 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:21:41.131 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:21:41.388 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:21:41.388 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:21:41.388 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:21:41.645 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:21:41.645 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:21:41.645 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:21:41.645 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:21:42.209 No valid GPT data, bailing 00:21:42.209 
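The /tmp/spdk.key-* secrets registered with keyring_file_add_key earlier use the DHHC-1 text form produced by format_dhchap_key: the second field is a hash identifier (00 null, 01 sha256, 02 sha384, 03 sha512) and the third is base64 of the raw secret with a four-byte CRC32 appended. A hypothetical standalone encoder in the same shape as the trace's inline 'python -' step; the little-endian CRC byte order is an assumption to verify against nvmf/common.sh:

key=3f75b4c19cdeb61b52360ed45aa660d24df08065cddd320f   # value taken from the gen_key trace
digest=0                                               # 0=null, 1=sha256, 2=sha384, 3=sha512
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")  # assumed byte order
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
EOF

For the values above this should print the DHHC-1:00:M2Y3NWI0...: string that shows up in the nvmet_auth_set_key call further down.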
00:58:29 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@667 -- # echo 1 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@669 -- # echo 1 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@672 -- # echo tcp 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@673 -- # echo 4420 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@674 -- # echo ipv4 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:21:42.209 00:21:42.209 Discovery Log Number of Records 2, Generation counter 2 00:21:42.209 =====Discovery Log Entry 0====== 00:21:42.209 trtype: tcp 00:21:42.209 adrfam: ipv4 00:21:42.209 subtype: current discovery subsystem 00:21:42.209 treq: not specified, sq flow control disable supported 00:21:42.209 portid: 1 00:21:42.209 trsvcid: 4420 00:21:42.209 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:42.209 traddr: 10.0.0.1 00:21:42.209 eflags: none 00:21:42.209 sectype: none 00:21:42.209 =====Discovery Log Entry 1====== 00:21:42.209 trtype: tcp 00:21:42.209 adrfam: ipv4 00:21:42.209 subtype: nvme subsystem 00:21:42.209 treq: not specified, sq flow control disable supported 00:21:42.209 portid: 1 00:21:42.209 trsvcid: 4420 00:21:42.209 subnqn: nqn.2024-02.io.spdk:cnode0 00:21:42.209 traddr: 10.0.0.1 00:21:42.209 eflags: none 00:21:42.209 sectype: none 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@37 -- # echo 0 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: ]] 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s sha256,sha384,sha512 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256,sha384,sha512 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:42.209 00:58:29 
nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.209 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:42.467 nvme0n1 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: ]] 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 0 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 
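The attach/verify/detach cycle that just completed for key1, and the digest/dhgroup sweep now starting, both drive the host side purely over RPC: pin the allowed digests and DH groups, attach with the keyring names registered earlier, confirm the controller name, then detach before the next combination. Collapsed from the rpc_cmd traces into one iteration (rpc_cmd resolves to scripts/rpc.py; the $rpc shorthand is mine):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
$rpc bdev_nvme_detach_controller nvme0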
00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:42.467 nvme0n1 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:42.467 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:21:42.724 00:58:29 
nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: ]] 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 1 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:42.724 nvme0n1 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.724 00:58:29 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: ]] 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 2 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.724 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:42.725 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:42.725 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:42.725 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:42.725 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:42.725 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:42.725 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:42.725 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:42.725 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:42.725 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:42.725 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:42.725 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.725 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.725 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:42.982 nvme0n1 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: ]] 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 3 
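[editor's note] Each nvmet_auth_set_key invocation in this stretch echoes the HMAC name, the DH group, the key, and (when one exists) the controller key. Because set -x does not print redirections, the destinations of those echoes are not visible in the trace; assuming the kernel nvmet target is being configured through configfs, the helper would look roughly like the sketch below. The host NQN path and attribute names are assumptions, not shown in the log:

    # Sketch of nvmet_auth_set_key, assuming the echoed values land in the
    # per-host DH-CHAP attributes of the kernel nvmet configfs tree.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
        echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. 'hmac(sha256)'
        echo "$dhgroup"      > "$host/dhchap_dhgroup"  # e.g. ffdhe2048
        echo "$key"          > "$host/dhchap_key"      # DHHC-1:xx:...: host secret
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # only for bidirectional auth
    }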
00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.982 00:58:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:43.241 nvme0n1 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 4 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:43.241 nvme0n1 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.241 00:58:30 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.241 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: ]] 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 0 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:43.499 00:58:30 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:43.499 nvme0n1 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:43.499 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: ]] 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 1 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:43.757 nvme0n1 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.757 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: ]] 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 2 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.758 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:44.017 nvme0n1 00:21:44.017 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.017 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:44.017 00:58:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.017 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.017 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:44.017 00:58:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: ]] 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 3 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.017 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:44.282 nvme0n1 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 4 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.282 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:44.540 nvme0n1 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: ]] 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 0 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.540 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:44.798 nvme0n1 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: ]] 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:44.798 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 1 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.057 00:58:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:45.315 nvme0n1 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:21:45.315 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: ]] 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 2 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.316 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:45.574 nvme0n1 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:45.574 00:58:32 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: ]] 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 3 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:21:45.574 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
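
Each turn of the keyid loop traced above follows the same shape: load the key (and controller key, when one exists) into the target for the current digest/dhgroup, configure the host initiator to match, attach, check that the controller appears, and detach. A minimal sketch of one such turn, reconstructed from the rpc_cmd calls in the trace — rpc.py standing in for the script's rpc_cmd wrapper is an assumption, and the echoed 'hmac(sha256)', ffdhe4096 and DHHC-1 strings are presumed to land in the target's DH-HMAC-CHAP key attributes:

  rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expected to print nvme0
  rpc.py bdev_nvme_detach_controller nvme0
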
00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.575 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:45.844 nvme0n1 00:21:45.844 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.844 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:45.844 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:45.844 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.844 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:45.844 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.844 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.844 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:45.844 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.844 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:46.102 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.102 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:46.102 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 4 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:21:46.103 00:58:32 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.103 00:58:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:46.361 nvme0n1 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: ]] 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 0 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.361 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:46.928 nvme0n1 00:21:46.928 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.928 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:21:46.928 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:46.928 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.928 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:46.928 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: ]] 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 1 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:46.929 
00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.929 00:58:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:47.502 nvme0n1 00:21:47.502 00:58:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: ]] 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:21:47.503 
00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 2 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.503 00:58:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:48.440 nvme0n1 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: ]] 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 3 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:48.440 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:48.699 nvme0n1 00:21:48.699 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:21:48.955 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 4 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
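
The keyid=4 turns differ from the others in one detail visible at host/auth.sh@71: the controller key is optional. The ckey array is built with bash's :+ alternate-value expansion, so when ckeys[keyid] is empty the attach call simply carries no --dhchap-ctrlr-key argument — which is why the key4 attach lines above and below omit that flag while every other keyid passes it:

  # host/auth.sh@71, as traced: ${ckeys[keyid]:+word} expands to word only
  # when ckeys[keyid] is set and non-empty, so ckey becomes the two extra
  # arguments (--dhchap-ctrlr-key ckeyN) for keyids 0-3 and stays an empty
  # array for keyid=4
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
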
00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.956 00:58:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:49.521 nvme0n1 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: ]] 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 0 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.521 00:58:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:50.896 nvme0n1 00:21:50.896 00:58:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.896 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:50.896 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:50.896 00:58:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.896 00:58:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:50.896 00:58:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.896 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.896 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:50.896 00:58:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.896 00:58:37 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:50.896 00:58:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.896 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:50.896 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: ]] 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 1 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.897 00:58:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:51.830 nvme0n1 00:21:51.830 00:58:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.830 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:51.830 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:51.830 00:58:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.830 00:58:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:51.830 00:58:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.830 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.830 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.830 00:58:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.830 00:58:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:51.830 00:58:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.830 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:51.830 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: ]] 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 2 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.831 00:58:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:53.204 nvme0n1 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:21:53.204 
00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: ]] 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 3 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.204 00:58:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:54.139 nvme0n1 00:21:54.139 00:58:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.139 00:58:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:54.139 00:58:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:54.139 00:58:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.139 00:58:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.139 00:58:41 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 4 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:54.139 00:58:41 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.139 00:58:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:55.514 nvme0n1 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:55.514 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: ]] 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 0 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:21:55.515 
00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:55.515 nvme0n1 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=1 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: ]] 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 1 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:55.515 nvme0n1 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 
00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:55.515 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: ]] 00:21:55.774 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 2 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:55.775 nvme0n1 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: ]] 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate 
sha384 ffdhe2048 3 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.775 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:56.034 nvme0n1 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:21:56.034 00:58:42 
nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 4 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.034 00:58:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:56.293 nvme0n1 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 
00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: ]] 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 0 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:56.293 
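The get_main_ns_ip trace that repeats before every attach (nvmf/common.sh@728-742) is a small transport-keyed lookup: it maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, checks that the transport and the selected variable are set, and echoes the resolved address, 10.0.0.1 on every TCP iteration in this run. A sketch reconstructed from the trace; the transport variable name (TEST_TRANSPORT here) and the return-on-failure behaviour are assumptions, the candidate map and indirect expansion come straight from the log:

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                   # trace: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1 # trace: [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1                            # trace: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                          # -> 10.0.0.1 on this run
}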
00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.293 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:56.551 nvme0n1 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: ]] 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 1 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:56.551 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.552 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.552 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:56.552 nvme0n1 00:21:56.552 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.552 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:56.552 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:56.552 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.552 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:56.552 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: ]] 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 2 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:56.809 nvme0n1 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:56.809 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: ]] 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 3 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:57.067 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:57.068 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:57.068 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:57.068 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:57.068 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:57.068 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:57.068 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:57.068 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:57.068 00:58:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:57.068 00:58:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:57.068 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.068 00:58:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:57.068 nvme0n1 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 4 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.068 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:57.327 nvme0n1 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: ]] 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 0 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.327 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:57.586 nvme0n1 00:21:57.586 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.586 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:57.586 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.586 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:57.586 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:57.586 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: ]] 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 1 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.845 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:58.104 nvme0n1 00:21:58.104 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.104 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.104 00:58:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:58.104 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.104 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:58.104 00:58:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: ]] 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 2 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.104 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:58.362 nvme0n1 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.362 00:58:45 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: ]] 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 3 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
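
(The xtrace above is one pass of auth.sh's nested loop over digests, DH groups, and key IDs: for each combination the target side installs the DH-HMAC-CHAP key for the host NQN, the initiator side is restricted to the matching --dhchap-digests/--dhchap-dhgroups, a controller is attached with the per-key --dhchap-key/--dhchap-ctrlr-key names, and the pass counts as successful when bdev_nvme_get_controllers reports nvme0, after which the controller is detached. A condensed sketch of one iteration follows; the rpc_cmd calls mirror the logged commands, while the configfs paths assume the standard kernel nvmet layout and the keyN/ckeyN names are assumed to have been registered earlier in the script, outside this excerpt:

  digest=sha384 dhgroup=ffdhe4096 keyid=3
  hostnqn=nqn.2024-02.io.spdk:host0 subnqn=nqn.2024-02.io.spdk:cnode0

  # target side: program the kernel nvmet host entry (what nvmet_auth_set_key does)
  host=/sys/kernel/config/nvmet/hosts/$hostnqn
  echo "hmac($digest)" > "$host/dhchap_hash"
  echo "$dhgroup"      > "$host/dhchap_dhgroup"
  echo "${keys[keyid]}" > "$host/dhchap_key"
  [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"

  # initiator side: pin the negotiated digest/group, then connect with this key pair
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # verify the authenticated connection came up, then tear it down for the next pass
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
)
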
00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:58.362 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:58.363 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:58.363 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.363 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:58.621 nvme0n1 00:21:58.621 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 4 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:21:58.880 00:58:45 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.880 00:58:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:59.139 nvme0n1 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: ]] 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 0 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:59.139 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:59.140 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:59.140 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:59.140 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:59.140 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:59.140 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.140 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.140 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:59.753 nvme0n1 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r 
'.[].name' 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: ]] 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 1 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:59.753 
00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.753 00:58:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:00.326 nvme0n1 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: ]] 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:22:00.326 
00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 2 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.326 00:58:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:00.585 00:58:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:00.585 00:58:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:00.585 00:58:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:00.585 00:58:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:00.585 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.585 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.585 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:01.152 nvme0n1 00:22:01.152 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.152 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:01.152 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.152 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.152 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:01.152 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.152 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.152 00:58:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.152 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.152 00:58:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: ]] 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 3 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:01.152 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:01.719 nvme0n1 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 4 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
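
(Two details in these passes deserve a gloss. First, the secrets use the NVMe DH-HMAC-CHAP interchange format "DHHC-1:NN:<base64>:", where, to the best of our reading, the two-digit field records the retained-key transformation applied to the secret (00 = none, 01/02/03 = HMAC-SHA-256/384/512) — consistent with the 00–03 prefixes visible in the log. Second, key ID 4 carries no controller key (ckey is empty), so that pass exercises unidirectional authentication: the parameter expansion at host/auth.sh@71 silently drops the --dhchap-ctrlr-key argument. A standalone illustration of that bash idiom, with placeholder key material:

  ckeys=("DHHC-1:00:placeholder:" "")   # index 0 has a ctrlr key, index 1 does not
  for keyid in 0 1; do
      # expands to the flag plus key name when ckeys[keyid] is non-empty, to nothing otherwise
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${ckey[*]:-bidirectional auth disabled}"
  done
)
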
00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.719 00:58:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:02.286 nvme0n1 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: ]] 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 0 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:02.286 00:58:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:02.287 00:58:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:02.287 00:58:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:02.287 00:58:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:02.287 00:58:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:02.543 00:58:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:02.543 00:58:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:02.543 00:58:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:02.543 00:58:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.543 00:58:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.543 00:58:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:03.474 nvme0n1 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.474 00:58:50 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:03.474 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: ]] 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 1 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.475 00:58:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:04.848 nvme0n1 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: ]] 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 2 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe8192 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.848 00:58:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:05.788 nvme0n1 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:22:05.788 
00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: ]] 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 3 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.788 00:58:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:07.160 nvme0n1 00:22:07.160 00:58:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.160 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.160 00:58:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.160 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.161 00:58:53 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 4 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:07.161 00:58:53 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.161 00:58:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.095 nvme0n1 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: ]] 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 0 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:08.095 
00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.095 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.355 nvme0n1 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=1 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: ]] 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 1 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.355 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.615 nvme0n1 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: ]] 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 2 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.615 nvme0n1 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.615 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.873 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.873 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: ]] 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate 
sha512 ffdhe2048 3 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.874 nvme0n1 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:08.874 00:58:55 
nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 4 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.874 00:58:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:09.132 nvme0n1 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:09.132 00:58:56 
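
Each connect_authenticate iteration reduces to four host-side RPCs, all of which appear verbatim in the surrounding trace; the bare nvme0n1 tokens are the bdev names printed by a successful attach. A sketch of one cycle, assuming rpc_cmd wraps SPDK's scripts/rpc.py and the target listens on 10.0.0.1:4420:

    # One attach/verify/detach cycle (sha512 + ffdhe2048, keyid 0 shown)
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

For keyid 4, whose controller key slot is empty (ckey= in the trace just above), --dhchap-ctrlr-key is simply omitted and authentication runs unidirectionally.
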
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: ]] 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 0 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:09.132 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:09.133 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:09.133 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.133 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:09.133 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.133 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:09.133 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:09.133 
00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:09.133 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:09.133 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:09.133 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:09.133 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:09.133 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:09.133 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:09.133 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:09.133 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:09.133 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.133 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.133 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:09.391 nvme0n1 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: ]] 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 1 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.391 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:09.650 nvme0n1 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: ]] 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 2 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.650 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:09.908 nvme0n1 00:22:09.908 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.908 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.908 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.908 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:09.908 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: ]] 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 3 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:09.909 nvme0n1 00:22:09.909 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.167 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.167 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.167 00:58:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:10.167 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:10.167 00:58:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 4 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.167 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:10.168 nvme0n1 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:10.168 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: ]] 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 0 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.426 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:10.685 nvme0n1 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: ]] 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 1 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.685 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:10.944 nvme0n1 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: ]] 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 2 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:10.944 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:10.945 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:10.945 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.945 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.945 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:10.945 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:10.945 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:10.945 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:10.945 00:58:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:10.945 00:58:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.945 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.945 00:58:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:11.511 nvme0n1 00:22:11.511 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.511 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.511 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.511 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:11.511 00:58:58 
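
The DHHC-1 strings echoed throughout are NVMe DH-HMAC-CHAP secrets in their standard textual form: a DHHC-1: prefix, a two-digit identifier for the hash used to transform the secret (00 = used as-is, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), then a base64 blob carrying the secret plus a CRC-32 tail, closed by a trailing colon. For illustration only, a compatible secret can be generated with nvme-cli (assuming a reasonably recent build that provides gen-dhchap-key):

    # Generate a 32-byte secret with the SHA-256 transformation applied
    nvme gen-dhchap-key --hmac=1 --key-length=32
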
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:11.511 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.511 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.511 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.511 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.511 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: ]] 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 3 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
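The nvmet_auth_set_key calls traced above (host/auth.sh@42-51) provision the kernel nvmet target with the host's DH-HMAC-CHAP parameters for one keyid before the initiator reconnects. A minimal stand-alone sketch of those steps, assuming the upstream Linux nvmet configfs layout — the hosts/<hostnqn>/dhchap_* attribute paths are an assumption, not taken from this log, and the key material is passed in directly here rather than looked up from the keys/ckeys arrays seen in the trace:

    # Hypothetical reconstruction of the nvmet_auth_set_key steps in the trace.
    # Assumes nvmet is loaded and the host NQN already exists under configfs.
    nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 key=$3 ckey=$4
        echo "hmac($digest)" > "$nvmet_host/dhchap_hash"   # e.g. hmac(sha512)
        echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"     # e.g. ffdhe4096
        echo "$key" > "$nvmet_host/dhchap_key"             # DHHC-1:xx:...: host key
        # The controller (bidirectional) key is optional; keyid 4 in this run has none.
        [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
    }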
00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.512 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:11.770 nvme0n1 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:22:11.770 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 4 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:11.771 00:58:58 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.771 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:12.029 nvme0n1 00:22:12.029 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.029 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.029 00:58:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:12.029 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.029 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:12.029 00:58:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: ]] 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 0 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.029 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:12.596 nvme0n1 00:22:12.596 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.596 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r 
'.[].name' 00:22:12.596 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.596 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.596 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:12.596 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.854 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.854 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.854 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.854 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:12.854 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.854 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:12.854 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:12.854 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.854 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:12.854 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:12.854 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: ]] 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 1 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:12.855 
00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.855 00:58:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:13.421 nvme0n1 00:22:13.421 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.421 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.421 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:13.421 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.421 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:13.421 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.421 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.421 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.421 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.421 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:13.421 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: ]] 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:22:13.422 
00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 2 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.422 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:13.988 nvme0n1 00:22:13.988 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.988 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.988 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:13.988 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.988 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:13.988 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.988 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.988 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.988 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.988 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:13.988 00:59:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.988 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:13.988 00:59:00 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:22:13.988 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.988 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:13.988 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:13.988 00:59:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: ]] 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 3 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:13.988 00:59:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:14.554 nvme0n1 00:22:14.554 00:59:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.554 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.554 00:59:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.554 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:14.554 00:59:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:14.554 00:59:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.812 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.812 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.812 00:59:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.812 00:59:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:14.812 00:59:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.812 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:14.812 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:14.812 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:14.812 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:14.812 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:14.812 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:14.812 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:22:14.812 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:14.812 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:14.812 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 4 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
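Every connect in this run is preceded by the same get_main_ns_ip trace (nvmf/common.sh@728-742): it maps the active transport to the name of the environment variable that holds the target-facing IP, then prints that variable's value. A condensed, illustrative reconstruction — the array contents and checks are taken from the trace, while TEST_TRANSPORT is an assumed name for whatever variable carries "tcp" here:

    # Condensed form of the get_main_ns_ip logic visible in the trace.
    get_main_ns_ip() {
        local ip var
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1
        var=${ip_candidates[$TEST_TRANSPORT]:-}
        [[ -z $var ]] && return 1
        ip=${!var}   # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1 in this run
        [[ -z $ip ]] && return 1
        echo "$ip"
    }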
00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.813 00:59:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:15.379 nvme0n1 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDk1Y2FlNWNkM2IyZTVlOWY0ZmE1OWZmOWMzNDUyN2H4bhrE: 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: ]] 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:OWEwMDk3N2UzOWU1YjNlZjY4NGY0YmEzOTdjYjQxMWZjZDhlZDQ0ZTI0Yjg4YjlkYzIyY2NhZmUxMmM3NmRlNZRXhvg=: 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 0 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.379 00:59:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:16.754 nvme0n1 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.754 00:59:03 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: ]] 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 1 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.754 00:59:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:17.688 nvme0n1 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MzUwNzczZTYwYTcyM2QwNjBkZmIwM2I0MmI2MTkyZDmSYBl/: 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: ]] 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTBiZTZiYmQ1MjRjNGIwYjcxNWIyZTFkMzc0ODQ4NjEfFP2K: 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 2 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe8192 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.688 00:59:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:19.064 nvme0n1 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:22:19.064 
00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MzI3NTg5ZWUyZDYyNTRmMjA3ZDBjMjJlZWFlYmFjYzJiMGYzNjE2MGFiMDRiNjcyfvwfBA==: 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: ]] 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkNzZkZDVmMTQ1MmIxODIyNmE1ZjE5ZmI5NTY1YjnXW7ya: 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 3 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.064 00:59:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:19.999 nvme0n1 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.999 00:59:06 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZDgwYjA3Y2I2N2E2MjNiNTI2MjhmZTdkNTU1OTMyODUwNWQxYTg1Y2IxYThhYWM1NDJhMDZhYjVhZjlmYTQ4MjFvRHA=: 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 4 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:19.999 00:59:06 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.999 00:59:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:21.374 nvme0n1 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@123 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y3NWI0YzE5Y2RlYjYxYjUyMzYwZWQ0NWFhNjYwZDI0ZGYwODA2NWNkZGQzMjBmKlLLlw==: 00:22:21.374 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:N2YzMDI1YWNkNjcxODEwZjY0ZGZhZjA3ZWZiNTdkZWIzMWNmN2IzZGI4ZDk2NDNl5Tu/sg==: 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@124 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # get_main_ns_ip 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:21.375 
00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:21.375 request: 00:22:21.375 { 00:22:21.375 "name": "nvme0", 00:22:21.375 "trtype": "tcp", 00:22:21.375 "traddr": "10.0.0.1", 00:22:21.375 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:21.375 "adrfam": "ipv4", 00:22:21.375 "trsvcid": "4420", 00:22:21.375 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:21.375 "method": "bdev_nvme_attach_controller", 00:22:21.375 "req_id": 1 00:22:21.375 } 00:22:21.375 Got JSON-RPC error response 00:22:21.375 response: 00:22:21.375 { 00:22:21.375 "code": -32602, 00:22:21.375 "message": "Invalid parameters" 00:22:21.375 } 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # jq length 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.375 00:59:08 
nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # (( 0 == 0 )) 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # get_main_ns_ip 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:21.375 request: 00:22:21.375 { 00:22:21.375 "name": "nvme0", 00:22:21.375 "trtype": "tcp", 00:22:21.375 "traddr": "10.0.0.1", 00:22:21.375 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:21.375 "adrfam": "ipv4", 00:22:21.375 "trsvcid": "4420", 00:22:21.375 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:21.375 "dhchap_key": "key2", 00:22:21.375 "method": "bdev_nvme_attach_controller", 00:22:21.375 "req_id": 1 00:22:21.375 } 00:22:21.375 Got JSON-RPC error response 00:22:21.375 response: 00:22:21.375 { 00:22:21.375 "code": -32602, 00:22:21.375 "message": "Invalid parameters" 00:22:21.375 } 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # jq 
length 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # (( 0 == 0 )) 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # get_main_ns_ip 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:21.375 request: 00:22:21.375 { 00:22:21.375 "name": "nvme0", 00:22:21.375 "trtype": "tcp", 00:22:21.375 "traddr": "10.0.0.1", 00:22:21.375 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:21.375 "adrfam": "ipv4", 00:22:21.375 "trsvcid": "4420", 00:22:21.375 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:21.375 "dhchap_key": "key1", 00:22:21.375 "dhchap_ctrlr_key": "ckey2", 00:22:21.375 "method": "bdev_nvme_attach_controller", 00:22:21.375 "req_id": 1 00:22:21.375 } 00:22:21.375 Got JSON-RPC error response 00:22:21.375 response: 00:22:21.375 { 00:22:21.375 "code": -32602, 00:22:21.375 "message": "Invalid parameters" 00:22:21.375 } 
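All three probes in this block are meant to fail: after the target was re-keyed through nvmet_auth_set_key (the three echoes into the host's configfs entry: digest, DH group, DHHC-1 secret), only keyid 1 with its controller key is valid, so attaching with no key, with key2, or with key1 paired with the wrong controller key ckey2 must each be rejected, and the NOT wrapper inverts rpc_cmd's nonzero exit status into a test pass. A minimal sketch of one probe issued by hand with the rpc.py client from this tree (standalone use is an assumption; the harness routes it through rpc_cmd):

    # expected to fail: the target holds keyid 1, we offer key2
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2
    # the JSON-RPC -32602 "Invalid parameters" response and the nonzero
    # exit status are the expected outcome, as in the trace above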
00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@140 -- # trap - SIGINT SIGTERM EXIT 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@141 -- # cleanup 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- host/auth.sh@24 -- # nvmftestfini 00:22:21.375 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@117 -- # sync 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@120 -- # set +e 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:21.376 rmmod nvme_tcp 00:22:21.376 rmmod nvme_fabrics 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@124 -- # set -e 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@125 -- # return 0 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@489 -- # '[' -n 4072046 ']' 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@490 -- # killprocess 4072046 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@946 -- # '[' -z 4072046 ']' 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@950 -- # kill -0 4072046 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # uname 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4072046 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4072046' 00:22:21.376 killing process with pid 4072046 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@965 -- # kill 4072046 00:22:21.376 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@970 -- # wait 4072046 00:22:21.635 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:21.635 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:21.635 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:21.635 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:21.635 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:21.635 00:59:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.635 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:21.635 00:59:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.541 00:59:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:22:23.541 00:59:10 nvmf_tcp.nvmf_auth -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:23.541 00:59:10 nvmf_tcp.nvmf_auth -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:23.541 00:59:10 nvmf_tcp.nvmf_auth -- host/auth.sh@27 -- # clean_kernel_target 00:22:23.541 00:59:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:22:23.541 00:59:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@686 -- # echo 0 00:22:23.799 00:59:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:23.799 00:59:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:23.799 00:59:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:23.799 00:59:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:23.799 00:59:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:23.799 00:59:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:23.799 00:59:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:24.735 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:22:24.735 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:22:24.735 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:22:24.735 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:22:24.735 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:22:24.735 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:22:24.735 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:22:24.735 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:22:24.735 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:22:24.735 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:22:24.735 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:22:24.735 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:22:24.735 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:22:24.735 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:22:24.735 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:22:24.735 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:22:25.672 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:22:25.672 00:59:12 nvmf_tcp.nvmf_auth -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.at8 /tmp/spdk.key-null.u3o /tmp/spdk.key-sha256.J9y /tmp/spdk.key-sha384.V8p /tmp/spdk.key-sha512.ob5 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:22:25.672 00:59:12 nvmf_tcp.nvmf_auth -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:26.606 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:22:26.606 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:22:26.606 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:22:26.606 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:22:26.606 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:22:26.606 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:22:26.607 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:22:26.607 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:22:26.607 0000:00:04.0 (8086 3c20): Already using the vfio-pci 
driver 00:22:26.607 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:22:26.607 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:22:26.607 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:22:26.607 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:22:26.607 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:22:26.607 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:22:26.607 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:22:26.607 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:22:26.607 00:22:26.607 real 0m49.961s 00:22:26.607 user 0m48.136s 00:22:26.607 sys 0m5.145s 00:22:26.607 00:59:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:26.607 00:59:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:26.607 ************************************ 00:22:26.607 END TEST nvmf_auth 00:22:26.607 ************************************ 00:22:26.607 00:59:13 nvmf_tcp -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:22:26.607 00:59:13 nvmf_tcp -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:26.607 00:59:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:26.607 00:59:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:26.607 00:59:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:26.607 ************************************ 00:22:26.607 START TEST nvmf_digest 00:22:26.607 ************************************ 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:26.607 * Looking for test storage... 
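The cleanup that closes nvmf_auth unwinds the kernel nvmet target in strict reverse order of its construction, because configfs refuses to remove a directory that still holds links or children: the allowed-host symlink and the port-to-subsystem link go first, then the namespace, port, and subsystem directories, and only then can the modules unload. Collected from the traced commands (paths exactly as in this run; the redirect target of the `echo 0` that disables the subsystem is not visible in the xtrace):

    rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe -r nvmet_tcp nvmet   # succeeds only once configfs is empty

With the kernel target gone, the key files removed, and the NVMe device rebound to vfio-pci by setup.sh, the digest suite takes over below.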
00:22:26.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.607 00:59:13 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.864 00:59:13 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:26.865 00:59:13 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:22:26.865 00:59:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:28.244 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:28.244 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:28.244 Found net devices under 0000:08:00.0: cvl_0_0 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:28.244 Found net devices under 0000:08:00.1: cvl_0_1 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.244 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:28.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:22:28.503 00:22:28.503 --- 10.0.0.2 ping statistics --- 00:22:28.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.503 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:22:28.503 00:22:28.503 --- 10.0.0.1 ping statistics --- 00:22:28.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.503 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:28.503 ************************************ 00:22:28.503 START TEST nvmf_digest_clean 00:22:28.503 ************************************ 00:22:28.503 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:22:28.504 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:22:28.504 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:22:28.504 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:22:28.504 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:22:28.504 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:22:28.504 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:28.504 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:28.504 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:28.504 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=4079683 00:22:28.504 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 4079683 00:22:28.504 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:28.504 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 4079683 ']' 00:22:28.504 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.504 
00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:28.504 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.504 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:28.504 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:28.504 [2024-05-15 00:59:15.507089] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:22:28.504 [2024-05-15 00:59:15.507197] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.504 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.762 [2024-05-15 00:59:15.573634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.762 [2024-05-15 00:59:15.688916] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.762 [2024-05-15 00:59:15.688986] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.762 [2024-05-15 00:59:15.689009] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.762 [2024-05-15 00:59:15.689023] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.762 [2024-05-15 00:59:15.689035] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
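nvmfappstart runs the target inside the cvl_0_0_ns_spdk namespace created above, so the listener it opens lives on the 10.0.0.2 side (port cvl_0_0) while bperf later connects from 10.0.0.1 on cvl_0_1 in the root namespace. The --wait-for-rpc flag holds subsystem initialization until the harness can drive it over the RPC socket; a sketch of the usual bring-up follows, assuming the standard SPDK RPC names (the exact here-doc that common_target_config feeds to rpc_cmd is not shown in the trace, the null0 bdev sizes are illustrative, and rpc.py abbreviates the full scripts/rpc.py path shown elsewhere):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    # once the RPC socket answers:
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp
    rpc.py bdev_null_create null0 100 4096     # sizes illustrative
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The transport creation and the 10.0.0.2:4420 listener notices that follow in the log correspond to the last two steps.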
00:22:28.762 [2024-05-15 00:59:15.689070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.762 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:28.762 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:22:28.762 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:28.762 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.762 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:28.762 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.762 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:22:28.762 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:22:28.762 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:22:28.762 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.762 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:29.021 null0 00:22:29.021 [2024-05-15 00:59:15.870872] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.021 [2024-05-15 00:59:15.894847] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:29.021 [2024-05-15 00:59:15.895104] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.021 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.021 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:22:29.021 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:29.021 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:29.021 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:29.021 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:22:29.021 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:22:29.021 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:29.021 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4079712 00:22:29.021 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4079712 /var/tmp/bperf.sock 00:22:29.021 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:29.021 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 4079712 ']' 00:22:29.021 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:29.021 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:22:29.021 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:29.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:29.021 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:29.021 00:59:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:29.021 [2024-05-15 00:59:15.944413] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:22:29.021 [2024-05-15 00:59:15.944516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4079712 ] 00:22:29.021 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.021 [2024-05-15 00:59:16.005264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.279 [2024-05-15 00:59:16.125074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.279 00:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:29.279 00:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:22:29.279 00:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:29.279 00:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:29.279 00:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:29.538 00:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:29.538 00:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:30.180 nvme0n1 00:22:30.180 00:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:30.180 00:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:30.180 Running I/O for 2 seconds... 
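The measurement side mirrors the target: bdevperf starts with -z and --wait-for-rpc, so it sits idle on its private socket until the harness finishes wiring it up, and I/O only starts when perform_tests is invoked. The driver sequence, exactly as traced (the --ddgst flag enables the NVMe/TCP data digest, so every data PDU on this connection carries a crc32c the initiator must compute; that count is what the accel statistics are checked against later):

    # all control goes over the private socket /var/tmp/bperf.sock
    rpc.py -s /var/tmp/bperf.sock framework_start_init
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # kick off the preconfigured -w randread -o 4096 -q 128 job
    bdevperf.py -s /var/tmp/bperf.sock perform_tests

Here rpc.py and bdevperf.py stand for the full script paths shown in the trace.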
00:22:32.134 00:22:32.134 Latency(us) 00:22:32.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.134 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:32.134 nvme0n1 : 2.00 17448.49 68.16 0.00 0.00 7326.06 4174.89 14466.47 00:22:32.134 =================================================================================================================== 00:22:32.134 Total : 17448.49 68.16 0.00 0.00 7326.06 4174.89 14466.47 00:22:32.134 0 00:22:32.134 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:32.134 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:32.134 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:32.134 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:32.134 | select(.opcode=="crc32c") 00:22:32.134 | "\(.module_name) \(.executed)"' 00:22:32.134 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4079712 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 4079712 ']' 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 4079712 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4079712 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4079712' 00:22:32.698 killing process with pid 4079712 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 4079712 00:22:32.698 Received shutdown signal, test time was about 2.000000 seconds 00:22:32.698 00:22:32.698 Latency(us) 00:22:32.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.698 =================================================================================================================== 00:22:32.698 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 4079712 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:22:32.698 00:59:19 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4080110 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4080110 /var/tmp/bperf.sock 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 4080110 ']' 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:32.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:32.698 00:59:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:32.956 [2024-05-15 00:59:19.777252] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:22:32.956 [2024-05-15 00:59:19.777348] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4080110 ] 00:22:32.956 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:32.956 Zero copy mechanism will not be used. 
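This second leg keeps the workload and the software crc32c path but flips the shape of the I/O: 128 KiB blocks at queue depth 16 instead of 4 KiB at depth 128, so it stresses bulk digest throughput rather than per-command overhead (hence the much lower IOPS but far higher MiB/s in the summary that follows). The only change is in the run_bperf arguments:

    run_bperf randread 4096   128 false   # leg 1: small blocks, deep queue
    run_bperf randread 131072 16  false   # leg 2: 128 KiB blocks, shallow queue
    # the trailing 'false' is scan_dsa: keep crc32c on the software module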
00:22:32.956 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.956 [2024-05-15 00:59:19.837864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.956 [2024-05-15 00:59:19.957577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.214 00:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:33.214 00:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:22:33.214 00:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:33.214 00:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:33.214 00:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:33.472 00:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:33.472 00:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:33.729 nvme0n1 00:22:33.729 00:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:33.729 00:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:33.986 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:33.986 Zero copy mechanism will not be used. 00:22:33.986 Running I/O for 2 seconds... 
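After each two-second run the harness asserts where the digests were actually computed: it reads the accel framework statistics over the bperf socket, filters for the crc32c opcode, and requires a nonzero execution count on the expected module (software here, since every leg passes scan_dsa=false). The check, as traced:

    rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # expected: "software <nonzero>"; acc_executed > 0 together with
    # module_name matching exp_module makes the leg pass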
00:22:35.887 00:22:35.887 Latency(us) 00:22:35.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.887 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:35.887 nvme0n1 : 2.00 3137.23 392.15 0.00 0.00 5094.94 4903.06 14175.19 00:22:35.887 =================================================================================================================== 00:22:35.887 Total : 3137.23 392.15 0.00 0.00 5094.94 4903.06 14175.19 00:22:35.887 0 00:22:35.887 00:59:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:35.887 00:59:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:35.887 00:59:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:35.887 00:59:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:35.887 00:59:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:35.887 | select(.opcode=="crc32c") 00:22:35.887 | "\(.module_name) \(.executed)"' 00:22:36.146 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:36.146 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:36.146 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:36.146 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:36.146 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4080110 00:22:36.146 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 4080110 ']' 00:22:36.146 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 4080110 00:22:36.146 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:22:36.146 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:36.146 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4080110 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4080110' 00:22:36.404 killing process with pid 4080110 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 4080110 00:22:36.404 Received shutdown signal, test time was about 2.000000 seconds 00:22:36.404 00:22:36.404 Latency(us) 00:22:36.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.404 =================================================================================================================== 00:22:36.404 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 4080110 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:22:36.404 00:59:23 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4080428 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4080428 /var/tmp/bperf.sock 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 4080428 ']' 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:36.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:36.404 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:36.661 [2024-05-15 00:59:23.485949] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
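At this point waitforlisten blocks until the freshly forked bdevperf answers on the UNIX socket; the echoed "Waiting for process..." message and the local max_retries=100 are visible in the xtrace above. A hedged stand-in for that helper (the real one lives in autotest_common.sh; the loop shape and the use of rpc_get_methods as a cheap probe RPC are assumptions, not the upstream code):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Hypothetical reimplementation of waitforlisten for a UNIX-socket app.
  waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
      for ((i = 0; i < 100; i++)); do            # max_retries=100, as in the log
          kill -0 "$pid" 2>/dev/null || return 1 # app died during startup
          "$SPDK/scripts/rpc.py" -s "$sock" rpc_get_methods \
              >/dev/null 2>&1 && return 0        # socket is up and answering
          sleep 0.1
      done
      return 1
  }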
00:22:36.661 [2024-05-15 00:59:23.486050] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4080428 ] 00:22:36.661 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.661 [2024-05-15 00:59:23.546187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.661 [2024-05-15 00:59:23.662536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.918 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:36.918 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:22:36.918 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:36.918 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:36.918 00:59:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:37.176 00:59:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:37.176 00:59:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:37.435 nvme0n1 00:22:37.435 00:59:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:37.435 00:59:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:37.692 Running I/O for 2 seconds... 
00:22:39.591
00:22:39.591 Latency(us)
00:22:39.591 Device Information               : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:22:39.591 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:22:39.591 nvme0n1                          :       2.01   18413.78      71.93       0.00      0.00    6934.00    3835.07   14660.65
00:22:39.591 ===================================================================================================================
00:22:39.591 Total                            :             18413.78      71.93       0.00      0.00    6934.00    3835.07   14660.65
00:22:39.591 0
00:22:39.591 00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:22:39.591 | select(.opcode=="crc32c")
00:22:39.591 | "\(.module_name) \(.executed)"'
00:22:39.591 00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:22:40.158 00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4080428
00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 4080428 ']'
00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 4080428
00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4080428
00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4080428'
killing process with pid 4080428
00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 4080428
00:22:40.159 Received shutdown signal, test time was about 2.000000 seconds
00:22:40.159
00:22:40.159 Latency(us)
00:22:40.159 Device Information               : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:22:40.159 ===================================================================================================================
00:22:40.159 Total                            :                 0.00       0.00       0.00      0.00       0.00       0.00       0.00
00:22:40.159 00:59:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 4080428
00:22:40.159 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:59:27
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:40.159 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:40.159 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:22:40.159 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:22:40.159 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:22:40.159 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:40.159 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4080825 00:22:40.159 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4080825 /var/tmp/bperf.sock 00:22:40.159 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:40.159 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 4080825 ']' 00:22:40.159 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:40.159 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:40.159 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:40.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:40.159 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:40.159 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:40.159 [2024-05-15 00:59:27.195322] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:22:40.159 [2024-05-15 00:59:27.195409] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4080825 ] 00:22:40.159 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:40.159 Zero copy mechanism will not be used. 
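Both completed iterations above ended with the same verification: read accel_get_stats back over the bperf socket and confirm that crc32c digests were actually computed, and by the expected module (software here; DSA when scan_dsa=true). A small self-contained sketch of that check; the JSON payload below is a hypothetical example, only the jq filter is verbatim from the log:

  # Hypothetical accel_get_stats payload; field names follow the jq filter
  # used by host/digest.sh above.
  stats='{"operations":[{"opcode":"crc32c","module_name":"software","executed":3137}]}'

  # Extract "module executed-count" for the crc32c opcode, as digest.sh does.
  read -r acc_module acc_executed < <(jq -rc '.operations[]
      | select(.opcode=="crc32c")
      | "\(.module_name) \(.executed)"' <<< "$stats")

  exp_module=software
  (( acc_executed > 0 )) || echo "no crc32c operations were executed"
  [[ $acc_module == "$exp_module" ]] && echo "$acc_executed crc32c ops via $acc_module"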
00:22:40.417 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.417 [2024-05-15 00:59:27.254547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.417 [2024-05-15 00:59:27.371134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.417 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:40.417 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:22:40.417 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:40.417 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:40.417 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:40.983 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:40.983 00:59:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:41.552 nvme0n1 00:22:41.552 00:59:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:41.552 00:59:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:41.552 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:41.552 Zero copy mechanism will not be used. 00:22:41.552 Running I/O for 2 seconds... 
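The teardown that closes each 2-second run is the generic killprocess helper; its xtrace is fully visible above for pids 4080110 and 4080428 and runs once more below for 4080825. Reassembled from those traces into runnable form (a reconstruction keyed to the autotest_common.sh line tags in the log, not the literal upstream source; the sudo branch never fires in this run):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                    # @946: no pid given
      kill -0 "$pid" || return 1                   # @950: is it still running?
      local process_name=
      if [ "$(uname)" = Linux ]; then              # @951
          process_name=$(ps --no-headers -o comm= "$pid")   # @952
      fi
      if [ "$process_name" = sudo ]; then          # @956: assumed sudo path
          sudo kill "$pid"
      else
          echo "killing process with pid $pid"     # @964
          kill "$pid"                              # @965
      fi
      wait "$pid"                                  # @970: reap, propagate rc
  }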
00:22:43.456
00:22:43.456 Latency(us)
00:22:43.456 Device Information               : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:22:43.456 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:22:43.457 nvme0n1                          :       2.01    2616.12     327.02       0.00      0.00    6100.52    4393.34   14272.28
00:22:43.457 ===================================================================================================================
00:22:43.457 Total                            :              2616.12     327.02       0.00      0.00    6100.52    4393.34   14272.28
00:22:43.457 0
00:22:43.457 00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:22:43.457 | select(.opcode=="crc32c")
00:22:43.457 | "\(.module_name) \(.executed)"'
00:22:43.457 00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:22:44.023 00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4080825
00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 4080825 ']'
00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 4080825
00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4080825
00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4080825'
killing process with pid 4080825
00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 4080825
Received shutdown signal, test time was about 2.000000 seconds
00:22:44.024
00:22:44.024 Latency(us)
00:22:44.024 Device Information               : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:22:44.024 ===================================================================================================================
00:22:44.024 Total                            :                 0.00       0.00       0.00      0.00       0.00       0.00       0.00
00:22:44.024 00:59:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 4080825
00:22:44.024 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 4079683
00:59:31
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 4079683 ']'
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 4079683
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4079683
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4079683'
killing process with pid 4079683
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 4079683
[2024-05-15 00:59:31.051397] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 4079683
00:22:44.283
00:22:44.283 real    0m15.817s
00:22:44.283 user    0m32.197s
00:22:44.283 sys     0m3.914s
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:22:44.283 ************************************
00:22:44.283 END TEST nvmf_digest_clean
00:22:44.283 ************************************
00:22:44.283 00:59:31 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:59:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:59:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable
00:59:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:22:44.283 ************************************
00:22:44.283 START TEST nvmf_digest_error
00:22:44.283 ************************************
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:44.541 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=4081195
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 4081195
00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z
4081195 ']' 00:22:44.541 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.541 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:44.541 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.541 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:44.541 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:44.541 [2024-05-15 00:59:31.393366] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:22:44.541 [2024-05-15 00:59:31.393463] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.541 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.541 [2024-05-15 00:59:31.458175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.541 [2024-05-15 00:59:31.572960] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.541 [2024-05-15 00:59:31.573019] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.541 [2024-05-15 00:59:31.573034] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.541 [2024-05-15 00:59:31.573047] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.541 [2024-05-15 00:59:31.573059] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
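With the nvmf target process now up inside the cvl_0_0_ns_spdk namespace, the records that follow configure it for the error-injection pass: crc32c is reassigned to the error accel module, a null0 bdev is created, and a TCP listener is exposed at 10.0.0.2:4420 under nqn.2016-06.io.spdk:cnode1. Only accel_assign_opc and the resulting notices are visible in the log; the rest of the bring-up below is a hedged sketch of common_target_config using standard SPDK rpc.py verbs, where the bdev size, block size, and subsystem serial are assumptions:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$SPDK/scripts/rpc.py" "$@"; }          # talks to /var/tmp/spdk.sock

  rpc accel_assign_opc -o crc32c -m error         # visible in the log above
  rpc framework_start_init                        # target started --wait-for-rpc

  rpc bdev_null_create null0 100 4096             # name matches the log; size/block assumed
  rpc nvmf_create_transport -t tcp                # "*** TCP Transport Init ***"
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp \
      -a 10.0.0.2 -s 4420                         # "Listening on 10.0.0.2 port 4420"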
00:22:44.541 [2024-05-15 00:59:31.573095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:44.800 [2024-05-15 00:59:31.673790] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:44.800 null0 00:22:44.800 [2024-05-15 00:59:31.781497] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.800 [2024-05-15 00:59:31.805473] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:44.800 [2024-05-15 00:59:31.805724] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4081278 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4081278 /var/tmp/bperf.sock 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 4081278 ']' 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:44.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:44.800 00:59:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:44.800 [2024-05-15 00:59:31.856466] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:22:44.800 [2024-05-15 00:59:31.856566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4081278 ] 00:22:45.059 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.059 [2024-05-15 00:59:31.916871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.059 [2024-05-15 00:59:32.033407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.317 00:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:45.317 00:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:22:45.317 00:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:45.317 00:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:45.575 00:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:45.575 00:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.575 00:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:45.575 00:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.575 00:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:45.575 00:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:45.833 nvme0n1 00:22:45.833 00:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:45.833 00:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.833 00:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:45.833 00:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.833 00:59:32 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:45.833 00:59:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:46.092 Running I/O for 2 seconds... 00:22:46.092 [2024-05-15 00:59:32.986025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.092 [2024-05-15 00:59:32.986081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.092 [2024-05-15 00:59:32.986104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.092 [2024-05-15 00:59:33.003709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.092 [2024-05-15 00:59:33.003745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.092 [2024-05-15 00:59:33.003764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.092 [2024-05-15 00:59:33.015954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.092 [2024-05-15 00:59:33.015990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.092 [2024-05-15 00:59:33.016009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.092 [2024-05-15 00:59:33.034132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.092 [2024-05-15 00:59:33.034170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.092 [2024-05-15 00:59:33.034200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.092 [2024-05-15 00:59:33.049571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.092 [2024-05-15 00:59:33.049606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.092 [2024-05-15 00:59:33.049625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.092 [2024-05-15 00:59:33.062298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.092 [2024-05-15 00:59:33.062332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.092 [2024-05-15 00:59:33.062351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.092 [2024-05-15 00:59:33.078110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1e9a180) 00:22:46.092 [2024-05-15 00:59:33.078143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.092 [2024-05-15 00:59:33.078161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.092 [2024-05-15 00:59:33.095393] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.092 [2024-05-15 00:59:33.095428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.092 [2024-05-15 00:59:33.095447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.092 [2024-05-15 00:59:33.108635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.092 [2024-05-15 00:59:33.108670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.092 [2024-05-15 00:59:33.108689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.092 [2024-05-15 00:59:33.125187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.092 [2024-05-15 00:59:33.125223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.092 [2024-05-15 00:59:33.125243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.092 [2024-05-15 00:59:33.142196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.092 [2024-05-15 00:59:33.142232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.092 [2024-05-15 00:59:33.142252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.351 [2024-05-15 00:59:33.156153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.351 [2024-05-15 00:59:33.156189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.351 [2024-05-15 00:59:33.156208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.351 [2024-05-15 00:59:33.173400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.351 [2024-05-15 00:59:33.173436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.351 [2024-05-15 00:59:33.173455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.351 [2024-05-15 00:59:33.186212] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.351 [2024-05-15 00:59:33.186246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.351 [2024-05-15 00:59:33.186264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.351 [2024-05-15 00:59:33.202125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.351 [2024-05-15 00:59:33.202159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.351 [2024-05-15 00:59:33.202178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.351 [2024-05-15 00:59:33.216688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.351 [2024-05-15 00:59:33.216723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.351 [2024-05-15 00:59:33.216741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.351 [2024-05-15 00:59:33.232737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.351 [2024-05-15 00:59:33.232770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.351 [2024-05-15 00:59:33.232789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.351 [2024-05-15 00:59:33.247590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.351 [2024-05-15 00:59:33.247623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.352 [2024-05-15 00:59:33.247641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.352 [2024-05-15 00:59:33.262402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.352 [2024-05-15 00:59:33.262435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.352 [2024-05-15 00:59:33.262453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.352 [2024-05-15 00:59:33.277024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.352 [2024-05-15 00:59:33.277058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.352 [2024-05-15 00:59:33.277077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:46.352 [2024-05-15 00:59:33.289378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.352 [2024-05-15 00:59:33.289411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.352 [2024-05-15 00:59:33.289440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.352 [2024-05-15 00:59:33.306606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.352 [2024-05-15 00:59:33.306640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.352 [2024-05-15 00:59:33.306659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.352 [2024-05-15 00:59:33.321599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.352 [2024-05-15 00:59:33.321632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.352 [2024-05-15 00:59:33.321650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.352 [2024-05-15 00:59:33.335055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.352 [2024-05-15 00:59:33.335105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.352 [2024-05-15 00:59:33.335124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.352 [2024-05-15 00:59:33.351696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.352 [2024-05-15 00:59:33.351730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.352 [2024-05-15 00:59:33.351748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.352 [2024-05-15 00:59:33.366561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.352 [2024-05-15 00:59:33.366594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.352 [2024-05-15 00:59:33.366612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.352 [2024-05-15 00:59:33.379037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.352 [2024-05-15 00:59:33.379070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.352 [2024-05-15 00:59:33.379089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.352 [2024-05-15 00:59:33.395555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.352 [2024-05-15 00:59:33.395591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.352 [2024-05-15 00:59:33.395609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.613 [2024-05-15 00:59:33.411099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.613 [2024-05-15 00:59:33.411134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.613 [2024-05-15 00:59:33.411152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.613 [2024-05-15 00:59:33.425939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.613 [2024-05-15 00:59:33.425981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.614 [2024-05-15 00:59:33.426000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.614 [2024-05-15 00:59:33.439206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.614 [2024-05-15 00:59:33.439241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.614 [2024-05-15 00:59:33.439259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.614 [2024-05-15 00:59:33.455940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.614 [2024-05-15 00:59:33.455973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.614 [2024-05-15 00:59:33.455991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.614 [2024-05-15 00:59:33.469617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.614 [2024-05-15 00:59:33.469650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.614 [2024-05-15 00:59:33.469668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.614 [2024-05-15 00:59:33.484471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.614 [2024-05-15 00:59:33.484504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.614 [2024-05-15 00:59:33.484521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.614 [2024-05-15 00:59:33.498977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.614 [2024-05-15 00:59:33.499011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.614 [2024-05-15 00:59:33.499029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.614 [2024-05-15 00:59:33.513448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.614 [2024-05-15 00:59:33.513481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.614 [2024-05-15 00:59:33.513499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.614 [2024-05-15 00:59:33.527513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.614 [2024-05-15 00:59:33.527550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.614 [2024-05-15 00:59:33.527570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.614 [2024-05-15 00:59:33.542003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.614 [2024-05-15 00:59:33.542037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.614 [2024-05-15 00:59:33.542055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.614 [2024-05-15 00:59:33.557862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.614 [2024-05-15 00:59:33.557895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.614 [2024-05-15 00:59:33.557914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.614 [2024-05-15 00:59:33.572278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.614 [2024-05-15 00:59:33.572314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.614 [2024-05-15 00:59:33.572333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.614 [2024-05-15 00:59:33.585776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.614 [2024-05-15 00:59:33.585810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
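Every failure triplet in this stream has the same shape: nvme_tcp.c reports the injected data digest mismatch on the receive path, nvme_qpair.c prints the READ that carried it, and the completion surfaces as TRANSIENT TRANSPORT ERROR (00/22), i.e. generic status code 0x22, which bdev_nvme keeps retrying because the controller was attached after bdev_nvme_set_options --bdev-retry-count -1. The faults were armed on the target side before the run; the exact RPCs are visible earlier in this test and amount to:

  # Target-side arming, in log order: route crc32c to the error module, let
  # the attach complete cleanly, then corrupt 256 crc32c (digest) results.
  rpc_cmd accel_assign_opc -o crc32c -m error                    # host/digest.sh@104
  rpc_cmd accel_error_inject_error -o crc32c -t disable          # @63: clean attach
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0     # @64
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # @67: arm faults

Because -i 256 bounds the injection, digests computed after the first 256 corruptions verify again, so the 2-second run completes with transient errors and retries rather than hard I/O failures.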
00:22:46.614 [2024-05-15 00:59:33.585828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.614 [2024-05-15 00:59:33.602175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.614 [2024-05-15 00:59:33.602210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.614 [2024-05-15 00:59:33.602228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.614 [2024-05-15 00:59:33.615730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.614 [2024-05-15 00:59:33.615765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.614 [2024-05-15 00:59:33.615783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.614 [2024-05-15 00:59:33.631222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.614 [2024-05-15 00:59:33.631256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.614 [2024-05-15 00:59:33.631274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.614 [2024-05-15 00:59:33.647633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.614 [2024-05-15 00:59:33.647667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.614 [2024-05-15 00:59:33.647685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.614 [2024-05-15 00:59:33.660600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.614 [2024-05-15 00:59:33.660633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.614 [2024-05-15 00:59:33.660651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.873 [2024-05-15 00:59:33.677112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.873 [2024-05-15 00:59:33.677144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.873 [2024-05-15 00:59:33.677171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.873 [2024-05-15 00:59:33.691452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180) 00:22:46.873 [2024-05-15 00:59:33.691484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 
lba:10306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.873 [2024-05-15 00:59:33.691503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:46.873 [2024-05-15 00:59:33.707344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180)
00:22:46.873 [2024-05-15 00:59:33.707377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.873 [2024-05-15 00:59:33.707395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (nvme_tcp data digest error, READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats from 00:59:33.720601 through 00:59:34.924365, differing only in timestamp, cid, and lba; get_transient_errcount below reports 133 such errors for this run ...]
00:22:47.911 [2024-05-15 00:59:34.938159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180)
00:22:47.911 [2024-05-15 00:59:34.938191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.911 [2024-05-15 00:59:34.938209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:47.911 [2024-05-15 00:59:34.955047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180)
00:22:47.911 [2024-05-15 00:59:34.955078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.911 [2024-05-15 00:59:34.955096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:47.911 [2024-05-15 00:59:34.967350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9a180)
00:22:47.911 [2024-05-15 00:59:34.967382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.911 [2024-05-15 00:59:34.967410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:48.169
00:22:48.169                                                                           Latency(us)
00:22:48.169 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:48.169 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:22:48.169 nvme0n1                                                                  :       2.00   17002.85      66.42       0.00       0.00    7517.55    3907.89   20971.52
00:22:48.169 ===================================================================================================================
00:22:48.169 Total                                                                    :   17002.85      66.42       0.00       0.00    7517.55    3907.89   20971.52
00:22:48.169 0
00:22:48.169 00:59:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:59:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:59:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:59:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:48.169 | .driver_specific
00:22:48.169 | .nvme_error
00:22:48.169 | .status_code
00:22:48.169 | .command_transient_transport_error'
00:22:48.428 00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 133 > 0 ))
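For reference, the check traced above can be read as the following shell sketch. It reconstructs what the get_transient_errcount helper in host/digest.sh does from the commands visible in the trace alone; the function body and variable names here are illustrative, not the helper's actual source.

    # Query bdevperf's per-bdev NVMe error counters (kept because the bdev layer
    # was configured with --nvme-error-stat) and pull out the transient count.
    get_transient_errcount() {
        local bdev=$1
        scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)   # evaluated to 133 in the run above
    (( errcount > 0 ))                           # non-zero proves digest errors were observed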
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4081278
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 4081278 ']'
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 4081278
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4081278
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:22:48.428 00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4081278'
killing process with pid 4081278
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 4081278
Received shutdown signal, test time was about 2.000000 seconds
00:22:48.428
00:22:48.428                                                                           Latency(us)
00:22:48.428 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:48.428 ===================================================================================================================
00:22:48.428 Total                                                                    :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 4081278
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4081598
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4081598 /var/tmp/bperf.sock
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 4081598 ']'
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
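The relaunch traced above boils down to starting a second bdevperf instance on its own RPC socket and blocking until that socket answers. A minimal sketch, assuming the SPDK repository root as the working directory; the polling loop is an illustrative stand-in for the waitforlisten helper from autotest_common.sh, whose real implementation also checks that the pid is still alive.

    # Start bdevperf idle (-z: wait for an explicit perform_tests RPC):
    # 128 KiB random reads at queue depth 16 for 2 seconds, core mask 0x2.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Illustrative stand-in for waitforlisten: poll until the RPC socket responds.
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done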
[2024-05-15 00:59:35.566791] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization...
[2024-05-15 00:59:35.566885] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4081598 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
00:22:48.687 EAL: No free 2048 kB hugepages reported on node 1
00:22:48.687 [2024-05-15 00:59:35.626206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:48.687 [2024-05-15 00:59:35.742806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:48.945 00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:22:48.945 00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:22:48.945 00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:48.945 00:59:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:49.204 00:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:49.204 00:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:49.204 00:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:49.204 00:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:49.204 00:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:49.204 00:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:49.770 nvme0n1
00:22:49.770 00:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:22:49.770 00:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:49.770 00:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:49.770 00:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:49.770 00:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:49.770 00:59:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:49.770 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:49.770 Zero copy mechanism will not be used.
00:22:49.770 Running I/O for 2 seconds...
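Stripped of xtrace noise, the setup just traced is five commands: configure the nvme bdev layer to keep error statistics and retry failed I/O indefinitely, clear any leftover crc32c error injection, attach the TCP controller with data digest enabled, arm the accel error module to corrupt crc32c results, and kick off the I/O. A condensed sketch follows; the bperf_rpc calls explicitly target /var/tmp/bperf.sock as traced, while the accel_error_inject_error calls go through rpc_cmd, whose destination socket is set elsewhere in the test harness, so no -s flag is shown for them here.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Keep NVMe error counters and retry failed I/O indefinitely, so the run
    # completes even though reads keep failing their data digest check.
    $rpc_py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any injection left armed by the previous run (via rpc_cmd in the trace).
    $rpc_py accel_error_inject_error -o crc32c -t disable
    # Attach the target with data digest (--ddgst) on; prints the new bdev name, nvme0n1.
    $rpc_py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm the accel error module to corrupt crc32c results (-t corrupt -i 32,
    # as traced), so digest verification on the connection starts failing.
    $rpc_py accel_error_inject_error -o crc32c -t corrupt -i 32
    # Start the 2-second randread run in the idle bdevperf instance.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests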
00:22:49.770 [2024-05-15 00:59:36.675267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00)
00:22:49.770 [2024-05-15 00:59:36.675334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.770 [2024-05-15 00:59:36.675366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:49.770 [2024-05-15 00:59:36.685671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00)
00:22:49.770 [2024-05-15 00:59:36.685706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.770 [2024-05-15 00:59:36.685725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line sequence repeats roughly every 10 ms from 00:59:36.695896 through 00:59:37.085224, always on cid:15 with len:32 and sqhd stepping 0061 -> 0001 -> 0021 -> 0041, differing otherwise only in timestamp and lba ...]
00:22:50.290 [2024-05-15 00:59:37.095642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00)
00:22:50.290 [2024-05-15 00:59:37.095676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-05-15 00:59:37.095693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.290 [2024-05-15 00:59:37.105970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.290 [2024-05-15 00:59:37.106003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-05-15 00:59:37.106021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.290 [2024-05-15 00:59:37.116180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.290 [2024-05-15 00:59:37.116218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-05-15 00:59:37.116237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.290 [2024-05-15 00:59:37.126433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.290 [2024-05-15 00:59:37.126466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-05-15 00:59:37.126484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.290 [2024-05-15 00:59:37.136636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.290 [2024-05-15 00:59:37.136668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-05-15 00:59:37.136686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.290 [2024-05-15 00:59:37.146824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.290 [2024-05-15 00:59:37.146855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-05-15 00:59:37.146873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.290 [2024-05-15 00:59:37.157072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.290 [2024-05-15 00:59:37.157104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-05-15 00:59:37.157122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.290 [2024-05-15 00:59:37.167306] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.290 [2024-05-15 00:59:37.167340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-05-15 00:59:37.167358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.290 [2024-05-15 00:59:37.177533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.290 [2024-05-15 00:59:37.177566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-05-15 00:59:37.177584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.290 [2024-05-15 00:59:37.187724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.290 [2024-05-15 00:59:37.187763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-05-15 00:59:37.187781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.290 [2024-05-15 00:59:37.198072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.290 [2024-05-15 00:59:37.198104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-05-15 00:59:37.198122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.290 [2024-05-15 00:59:37.208274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.290 [2024-05-15 00:59:37.208314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.290 [2024-05-15 00:59:37.208332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.290 [2024-05-15 00:59:37.218531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.290 [2024-05-15 00:59:37.218563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-05-15 00:59:37.218581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.291 [2024-05-15 00:59:37.228784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.291 [2024-05-15 00:59:37.228817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-05-15 00:59:37.228834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:22:50.291 [2024-05-15 00:59:37.239008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.291 [2024-05-15 00:59:37.239041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-05-15 00:59:37.239058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.291 [2024-05-15 00:59:37.249202] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.291 [2024-05-15 00:59:37.249235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-05-15 00:59:37.249252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.291 [2024-05-15 00:59:37.259590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.291 [2024-05-15 00:59:37.259622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-05-15 00:59:37.259639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.291 [2024-05-15 00:59:37.269761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.291 [2024-05-15 00:59:37.269794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-05-15 00:59:37.269811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.291 [2024-05-15 00:59:37.279949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.291 [2024-05-15 00:59:37.279989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-05-15 00:59:37.280006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.291 [2024-05-15 00:59:37.290124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.291 [2024-05-15 00:59:37.290157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-05-15 00:59:37.290182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.291 [2024-05-15 00:59:37.300312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.291 [2024-05-15 00:59:37.300352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-05-15 00:59:37.300370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.291 [2024-05-15 00:59:37.310505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.291 [2024-05-15 00:59:37.310536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-05-15 00:59:37.310554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.291 [2024-05-15 00:59:37.320692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.291 [2024-05-15 00:59:37.320724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-05-15 00:59:37.320741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.291 [2024-05-15 00:59:37.330951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.291 [2024-05-15 00:59:37.330983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-05-15 00:59:37.331000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.291 [2024-05-15 00:59:37.341184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.291 [2024-05-15 00:59:37.341216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.291 [2024-05-15 00:59:37.341234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.550 [2024-05-15 00:59:37.351471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.550 [2024-05-15 00:59:37.351504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.550 [2024-05-15 00:59:37.351522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.550 [2024-05-15 00:59:37.361849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.550 [2024-05-15 00:59:37.361882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.550 [2024-05-15 00:59:37.361899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.550 [2024-05-15 00:59:37.372044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.550 [2024-05-15 00:59:37.372077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.550 [2024-05-15 00:59:37.372095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.550 [2024-05-15 00:59:37.382278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.382317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.382335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.392742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.392775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.392793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.402956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.402988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.403006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.413137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.413170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.413187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.423320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.423353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.423370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.433501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.433532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.433550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.443674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.443714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:50.551 [2024-05-15 00:59:37.443732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.453831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.453863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.453880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.464037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.464070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.464094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.474261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.474293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.474311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.484470] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.484503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.484522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.494615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.494655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.494673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.504833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.504873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.504890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.515009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.515042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.515059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.525276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.525314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.525332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.535421] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.535460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.535478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.545583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.545616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.545634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.555795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.555835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.555853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.566131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.566164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.566182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.576344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.576377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.576394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.586522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.586554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.586572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.596677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.596710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.596728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.551 [2024-05-15 00:59:37.606951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.551 [2024-05-15 00:59:37.606985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.551 [2024-05-15 00:59:37.607004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.810 [2024-05-15 00:59:37.617239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.810 [2024-05-15 00:59:37.617273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.810 [2024-05-15 00:59:37.617291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.810 [2024-05-15 00:59:37.627521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.810 [2024-05-15 00:59:37.627553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.810 [2024-05-15 00:59:37.627571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.810 [2024-05-15 00:59:37.637778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.810 [2024-05-15 00:59:37.637811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.810 [2024-05-15 00:59:37.637829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.810 [2024-05-15 00:59:37.647970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.810 [2024-05-15 00:59:37.648003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.810 [2024-05-15 00:59:37.648021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.810 [2024-05-15 00:59:37.658198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 
00:22:50.810 [2024-05-15 00:59:37.658231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.810 [2024-05-15 00:59:37.658248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.668384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.668416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.668434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.678675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.678708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.678726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.688900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.688949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.688968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.699241] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.699274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.699291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.709483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.709515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.709533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.719718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.719750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.719768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.729922] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.729962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.729987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.740153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.740185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.740203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.750564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.750596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.750614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.760785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.760817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.760834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.771025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.771056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.771073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.781259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.781290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.781307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.791519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.791551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.791569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.801728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.801762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.801780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.811961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.811992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.812010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.822189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.822221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.822240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.832444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.832476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.832494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.842627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.842658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.842676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.852867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.852899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.852917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.811 [2024-05-15 00:59:37.863087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:50.811 [2024-05-15 00:59:37.863119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.811 [2024-05-15 00:59:37.863137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.071 [2024-05-15 00:59:37.873448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:51.071 [2024-05-15 00:59:37.873482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.071 [2024-05-15 00:59:37.873500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.071 [2024-05-15 00:59:37.883743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:51.071 [2024-05-15 00:59:37.883774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.071 [2024-05-15 00:59:37.883792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.071 [2024-05-15 00:59:37.893968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:51.071 [2024-05-15 00:59:37.894000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.071 [2024-05-15 00:59:37.894018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.071 [2024-05-15 00:59:37.904207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:51.071 [2024-05-15 00:59:37.904239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.071 [2024-05-15 00:59:37.904264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.071 [2024-05-15 00:59:37.914404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:51.071 [2024-05-15 00:59:37.914437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.071 [2024-05-15 00:59:37.914455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.071 [2024-05-15 00:59:37.924595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:51.071 [2024-05-15 00:59:37.924627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.071 [2024-05-15 00:59:37.924645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.071 [2024-05-15 00:59:37.934823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:51.071 [2024-05-15 00:59:37.934857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.071 [2024-05-15 00:59:37.934874] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.071 [2024-05-15 00:59:37.945043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:51.071 [2024-05-15 00:59:37.945075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.071 [2024-05-15 00:59:37.945093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.071 [2024-05-15 00:59:37.955256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:51.071 [2024-05-15 00:59:37.955288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.071 [2024-05-15 00:59:37.955306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.071 [2024-05-15 00:59:37.965534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:51.071 [2024-05-15 00:59:37.965569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.071 [2024-05-15 00:59:37.965587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.071 [2024-05-15 00:59:37.975825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:51.071 [2024-05-15 00:59:37.975858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.071 [2024-05-15 00:59:37.975876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.071 [2024-05-15 00:59:37.986068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:51.071 [2024-05-15 00:59:37.986100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.071 [2024-05-15 00:59:37.986117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.071 [2024-05-15 00:59:37.996393] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:51.071 [2024-05-15 00:59:37.996437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.071 [2024-05-15 00:59:37.996456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.071 [2024-05-15 00:59:38.006654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00) 00:22:51.071 [2024-05-15 00:59:38.006686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:22:51.071 [2024-05-15 00:59:38.006704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... entries from 2024-05-15 00:59:38.016870 through 00:59:38.641237 elided: the same three-line pattern repeats roughly every 10 ms (nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00), a READ sqid:1 cid:15 nsid:1 len:32 command print, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion), differing only in lba and sqhd ...]
00:22:51.851 [2024-05-15 00:59:38.651510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00)
00:22:51.851 [2024-05-15 00:59:38.651543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:51.851 [2024-05-15 00:59:38.651561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:22:51.851 [2024-05-15 00:59:38.661755] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5cc00)
00:22:51.851 [2024-05-15 00:59:38.661788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:51.851 [2024-05-15 00:59:38.661806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:22:51.851
00:22:51.851 Latency(us)
00:22:51.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:51.851 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:22:51.851 nvme0n1 : 2.01 3022.85 377.86 0.00 0.00 5287.72 5024.43 13107.20
00:22:51.851 ===================================================================================================================
00:22:51.851 Total : 3022.85 377.86 0.00 0.00 5287.72 5024.43 13107.20
00:22:51.851 0
00:22:51.851 00:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:51.851 00:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:51.851 00:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:51.851 00:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:51.851 | .driver_specific
00:22:51.851 | .nvme_error
00:22:51.851 | .status_code
00:22:51.851 | .command_transient_transport_error'
00:22:52.110 00:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 195 > 0 ))
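The assertion above is the pass criterion for the randread leg: at least one injected digest failure must have been counted as a transient transport error (195 were). The table is internally consistent, too: 3022.85 IOPS at the 131072-byte (0.125 MiB) I/O size gives 3022.85 x 0.125 = 377.86 MiB/s, matching the MiB/s column. A minimal standalone sketch of what the traced get_transient_errcount call does follows; the rpc.py invocation and the jq filter are verbatim from the trace, while the function wrapper and the shortened paths are assumptions, not copied from digest.sh:

  #!/usr/bin/env bash
  # Sketch of the traced helper, assuming the bperf RPC socket used above.
  RPC_SOCK=/var/tmp/bperf.sock

  get_transient_errcount() {
      local bdev=$1
      # driver_specific.nvme_error is only populated because bdev_nvme_set_options
      # was called with --nvme-error-stat when the controller was set up.
      scripts/rpc.py -s "$RPC_SOCK" bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0]
              | .driver_specific
              | .nvme_error
              | .status_code
              | .command_transient_transport_error'
  }

  # Pass when any transient transport errors were recorded, as in (( 195 > 0 )):
  (( $(get_transient_errcount nvme0n1) > 0 ))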
00:22:52.110 00:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4081598
00:22:52.110 00:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 4081598 ']'
00:22:52.110 00:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 4081598
00:22:52.110 00:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:22:52.110 00:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:22:52.110 00:59:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4081598
00:22:52.110 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:22:52.110 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:22:52.110 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4081598'
00:22:52.110 killing process with pid 4081598
00:22:52.110 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 4081598
00:22:52.110 Received shutdown signal, test time was about 2.000000 seconds
00:22:52.110
00:22:52.110 Latency(us)
00:22:52.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:52.110 ===================================================================================================================
00:22:52.110 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:52.110 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 4081598
00:22:52.368 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:22:52.368 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:22:52.368 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:22:52.368 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:22:52.368 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:22:52.368 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4081991
00:22:52.368 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:22:52.368 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4081991 /var/tmp/bperf.sock
00:22:52.368 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 4081991 ']'
00:22:52.368 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:52.368 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:22:52.368 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:52.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:52.368 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:22:52.368 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
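The bdevperf launch traced above drives the write-path variant of the same digest-error test. An annotated restatement of that command (workspace prefix shortened; the flag glosses are my reading of the traced invocation, not quoted from bdevperf's help text):

  # bdevperf flags as traced:
  #   -m 2                    core mask 0x2 (one reactor, on core 1)
  #   -r /var/tmp/bperf.sock  private RPC socket for this instance
  #   -w randwrite            workload type
  #   -o 4096                 I/O size in bytes
  #   -t 2                    run time in seconds
  #   -q 128                  queue depth
  #   -z                      start idle and wait for a perform_tests RPC
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!

  # waitforlisten then blocks until the socket accepts connections, which is
  # what the 'Waiting for process to start up...' message above reports.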
00:22:52.368 [2024-05-15 00:59:39.267816] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization...
00:22:52.368 [2024-05-15 00:59:39.267912] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4081991 ]
00:22:52.368 EAL: No free 2048 kB hugepages reported on node 1
00:22:52.368 [2024-05-15 00:59:39.329374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:52.626 [2024-05-15 00:59:39.446746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:52.627 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:22:52.627 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:22:52.627 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:52.627 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:52.885 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:52.885 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:52.885 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:52.885 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:52.885 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:52.885 00:59:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:53.452 nvme0n1
00:22:53.452 00:59:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:22:53.452 00:59:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:53.452 00:59:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:53.452 00:59:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:53.452 00:59:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:53.452 00:59:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:53.452 Running I/O for 2 seconds...
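Everything needed to make the WRITE completions below fail is set up in the trace just above, and the ordering matters: crc32c error injection is disabled while the controller attaches, so the connect path stays clean, and is only re-enabled (with the traced interval of 256 operations) right before perform_tests. A condensed sketch follows; every RPC is verbatim from the trace, while the $BPERF/$TGT wrappers, the shortened paths, and the assumption that rpc_cmd targets the nvmf target's default RPC socket are mine:

  # bperf_rpc talks to the bdevperf instance; rpc_cmd (assumed) to the target.
  BPERF="scripts/rpc.py -s /var/tmp/bperf.sock"
  TGT="scripts/rpc.py"

  # Keep per-status-code NVMe error counters, and retry failed I/O forever so
  # each injected digest failure is counted instead of failing the job.
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Injection off while connecting...
  $TGT accel_error_inject_error -o crc32c -t disable

  # ...attach over TCP with data digest (--ddgst) verification enabled...
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # ...then corrupt crc32c results at the traced interval and start the I/O.
  $TGT accel_error_inject_error -o crc32c -t corrupt -i 256
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests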
00:22:53.452 [2024-05-15 00:59:40.484154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190ed920
00:22:53.452 [2024-05-15 00:59:40.485372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:53.452 [2024-05-15 00:59:40.485416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0
[... entries from 2024-05-15 00:59:40.497248 through 00:59:41.229731 elided: the same three-line pattern repeats (tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with varying pdu offsets, a WRITE sqid:1 len:1 command print, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion), differing only in pdu, cid, lba and sqhd ...]
00:22:54.230 [2024-05-15 00:59:41.242391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f0788
[2024-05-15 00:59:41.244333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1120 len:1 SGL DATA BLOCK
OFFSET 0x0 len:0x1000 00:22:54.230 [2024-05-15 00:59:41.244364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:54.230 [2024-05-15 00:59:41.256823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190ea680 00:22:54.230 [2024-05-15 00:59:41.258940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.230 [2024-05-15 00:59:41.258979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:54.230 [2024-05-15 00:59:41.271338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f35f0 00:22:54.230 [2024-05-15 00:59:41.273645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.230 [2024-05-15 00:59:41.273676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:54.230 [2024-05-15 00:59:41.281190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e5ec8 00:22:54.230 [2024-05-15 00:59:41.282154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.230 [2024-05-15 00:59:41.282184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:54.489 [2024-05-15 00:59:41.295914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190fb8b8 00:22:54.489 [2024-05-15 00:59:41.297107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.489 [2024-05-15 00:59:41.297137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:54.489 [2024-05-15 00:59:41.309034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e5220 00:22:54.489 [2024-05-15 00:59:41.310198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.489 [2024-05-15 00:59:41.310228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:54.489 [2024-05-15 00:59:41.323492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e3060 00:22:54.489 [2024-05-15 00:59:41.324848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.489 [2024-05-15 00:59:41.324878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:54.489 [2024-05-15 00:59:41.337913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e3498 00:22:54.489 [2024-05-15 00:59:41.339446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22993 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.490 [2024-05-15 00:59:41.339476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:54.490 [2024-05-15 00:59:41.352349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190ed4e8 00:22:54.490 [2024-05-15 00:59:41.354078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.490 [2024-05-15 00:59:41.354109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:54.490 [2024-05-15 00:59:41.366738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e5220 00:22:54.490 [2024-05-15 00:59:41.368648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.490 [2024-05-15 00:59:41.368679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:54.490 [2024-05-15 00:59:41.381175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190ea248 00:22:54.490 [2024-05-15 00:59:41.383334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.490 [2024-05-15 00:59:41.383365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:54.490 [2024-05-15 00:59:41.395647] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e4de8 00:22:54.490 [2024-05-15 00:59:41.397955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.490 [2024-05-15 00:59:41.397995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:54.490 [2024-05-15 00:59:41.405538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190eee38 00:22:54.490 [2024-05-15 00:59:41.406532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.490 [2024-05-15 00:59:41.406562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:54.490 [2024-05-15 00:59:41.418591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f6890 00:22:54.490 [2024-05-15 00:59:41.419540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.490 [2024-05-15 00:59:41.419571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:54.490 [2024-05-15 00:59:41.433184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e27f0 00:22:54.490 [2024-05-15 00:59:41.434336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:74 nsid:1 lba:3489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.490 [2024-05-15 00:59:41.434367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:54.490 [2024-05-15 00:59:41.447608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e6fa8 00:22:54.490 [2024-05-15 00:59:41.449006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.490 [2024-05-15 00:59:41.449037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:54.490 [2024-05-15 00:59:41.462993] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e23b8 00:22:54.490 [2024-05-15 00:59:41.464564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.490 [2024-05-15 00:59:41.464595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:54.490 [2024-05-15 00:59:41.477206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e9168 00:22:54.490 [2024-05-15 00:59:41.478986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.490 [2024-05-15 00:59:41.479017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:54.490 [2024-05-15 00:59:41.490255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f1430 00:22:54.490 [2024-05-15 00:59:41.492023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.490 [2024-05-15 00:59:41.492053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:54.490 [2024-05-15 00:59:41.504723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e5ec8 00:22:54.490 [2024-05-15 00:59:41.506679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.490 [2024-05-15 00:59:41.506709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:54.490 [2024-05-15 00:59:41.519148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e0ea0 00:22:54.490 [2024-05-15 00:59:41.521309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.490 [2024-05-15 00:59:41.521339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:54.490 [2024-05-15 00:59:41.533537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e6738 00:22:54.490 [2024-05-15 00:59:41.535872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.490 [2024-05-15 00:59:41.535902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:54.490 [2024-05-15 00:59:41.543418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e0a68 00:22:54.490 [2024-05-15 00:59:41.544409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.490 [2024-05-15 00:59:41.544439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:54.749 [2024-05-15 00:59:41.558067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f5be8 00:22:54.749 [2024-05-15 00:59:41.559223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.749 [2024-05-15 00:59:41.559253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:54.749 [2024-05-15 00:59:41.571079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190ec840 00:22:54.749 [2024-05-15 00:59:41.572230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.749 [2024-05-15 00:59:41.572261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:54.749 [2024-05-15 00:59:41.585489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e3060 00:22:54.749 [2024-05-15 00:59:41.586821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.749 [2024-05-15 00:59:41.586852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:54.749 [2024-05-15 00:59:41.599863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e3498 00:22:54.749 [2024-05-15 00:59:41.601420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.749 [2024-05-15 00:59:41.601450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:54.749 [2024-05-15 00:59:41.614275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190edd58 00:22:54.749 [2024-05-15 00:59:41.616001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.749 [2024-05-15 00:59:41.616038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:54.749 [2024-05-15 00:59:41.628615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190ec840 00:22:54.749 [2024-05-15 
00:59:41.630557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.749 [2024-05-15 00:59:41.630588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:54.749 [2024-05-15 00:59:41.643025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e12d8 00:22:54.749 [2024-05-15 00:59:41.645135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.749 [2024-05-15 00:59:41.645165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:54.749 [2024-05-15 00:59:41.655824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f4b08 00:22:54.749 [2024-05-15 00:59:41.657371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.749 [2024-05-15 00:59:41.657401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:54.749 [2024-05-15 00:59:41.668285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f8618 00:22:54.749 [2024-05-15 00:59:41.670331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.749 [2024-05-15 00:59:41.670361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:54.749 [2024-05-15 00:59:41.679985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f81e0 00:22:54.749 [2024-05-15 00:59:41.680938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.749 [2024-05-15 00:59:41.680968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:54.749 [2024-05-15 00:59:41.694414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e5ec8 00:22:54.749 [2024-05-15 00:59:41.695583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.749 [2024-05-15 00:59:41.695616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:54.749 [2024-05-15 00:59:41.708793] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f8e88 00:22:54.749 [2024-05-15 00:59:41.710189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.749 [2024-05-15 00:59:41.710219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:54.749 [2024-05-15 00:59:41.723168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f92c0 
00:22:54.749 [2024-05-15 00:59:41.724711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.749 [2024-05-15 00:59:41.724741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:54.749 [2024-05-15 00:59:41.737556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e4578 00:22:54.749 [2024-05-15 00:59:41.739337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.749 [2024-05-15 00:59:41.739367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:54.749 [2024-05-15 00:59:41.751919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e5ec8 00:22:54.749 [2024-05-15 00:59:41.753862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.749 [2024-05-15 00:59:41.753892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:54.749 [2024-05-15 00:59:41.766265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f2948 00:22:54.749 [2024-05-15 00:59:41.768416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.749 [2024-05-15 00:59:41.768446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:54.749 [2024-05-15 00:59:41.780659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190eaab8 00:22:54.750 [2024-05-15 00:59:41.782976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.750 [2024-05-15 00:59:41.783006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:54.750 [2024-05-15 00:59:41.790441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f31b8 00:22:54.750 [2024-05-15 00:59:41.791416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.750 [2024-05-15 00:59:41.791455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:54.750 [2024-05-15 00:59:41.804962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190fb8b8 00:22:54.750 [2024-05-15 00:59:41.806132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.750 [2024-05-15 00:59:41.806162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:55.008 [2024-05-15 00:59:41.819429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1efcfa0) with pdu=0x2000190e9e10 00:22:55.008 [2024-05-15 00:59:41.820785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.008 [2024-05-15 00:59:41.820815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:55.008 [2024-05-15 00:59:41.833396] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190eee38 00:22:55.008 [2024-05-15 00:59:41.834778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.008 [2024-05-15 00:59:41.834808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:55.008 [2024-05-15 00:59:41.847501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e5a90 00:22:55.008 [2024-05-15 00:59:41.848662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.008 [2024-05-15 00:59:41.848695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:55.008 [2024-05-15 00:59:41.861437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e01f8 00:22:55.008 [2024-05-15 00:59:41.862970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.008 [2024-05-15 00:59:41.863009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:55.008 [2024-05-15 00:59:41.875509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190eea00 00:22:55.008 [2024-05-15 00:59:41.877228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.008 [2024-05-15 00:59:41.877262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:55.008 [2024-05-15 00:59:41.886754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190ef270 00:22:55.008 [2024-05-15 00:59:41.887688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.008 [2024-05-15 00:59:41.887723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:55.008 [2024-05-15 00:59:41.900874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f0ff8 00:22:55.008 [2024-05-15 00:59:41.902030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.008 [2024-05-15 00:59:41.902063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:55.008 [2024-05-15 00:59:41.913872] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190fc128 00:22:55.008 [2024-05-15 00:59:41.914989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.008 [2024-05-15 00:59:41.915018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:55.008 [2024-05-15 00:59:41.928301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f8a50 00:22:55.008 [2024-05-15 00:59:41.929618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.008 [2024-05-15 00:59:41.929647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:55.009 [2024-05-15 00:59:41.942617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190fb480 00:22:55.009 [2024-05-15 00:59:41.944141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.009 [2024-05-15 00:59:41.944176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:55.009 [2024-05-15 00:59:41.956964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190fd208 00:22:55.009 [2024-05-15 00:59:41.958694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.009 [2024-05-15 00:59:41.958724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:55.009 [2024-05-15 00:59:41.969824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190fc128 00:22:55.009 [2024-05-15 00:59:41.970937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.009 [2024-05-15 00:59:41.970972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:55.009 [2024-05-15 00:59:41.983731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e38d0 00:22:55.009 [2024-05-15 00:59:41.984661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.009 [2024-05-15 00:59:41.984697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:55.009 [2024-05-15 00:59:41.999464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190fef90 00:22:55.009 [2024-05-15 00:59:42.001549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.009 [2024-05-15 00:59:42.001582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:55.009 [2024-05-15 00:59:42.012315] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e6738 00:22:55.009 [2024-05-15 00:59:42.013820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.009 [2024-05-15 00:59:42.013850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:55.009 [2024-05-15 00:59:42.024815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e3060 00:22:55.009 [2024-05-15 00:59:42.026831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.009 [2024-05-15 00:59:42.026861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:55.009 [2024-05-15 00:59:42.036621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190efae0 00:22:55.009 [2024-05-15 00:59:42.037547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.009 [2024-05-15 00:59:42.037576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:55.009 [2024-05-15 00:59:42.051014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e99d8 00:22:55.009 [2024-05-15 00:59:42.052134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.009 [2024-05-15 00:59:42.052163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:55.009 [2024-05-15 00:59:42.065474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190eff18 00:22:55.267 [2024-05-15 00:59:42.066869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.267 [2024-05-15 00:59:42.066899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:55.267 [2024-05-15 00:59:42.079981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f20d8 00:22:55.267 [2024-05-15 00:59:42.081493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.267 [2024-05-15 00:59:42.081522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:55.267 [2024-05-15 00:59:42.092847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190ddc00 00:22:55.267 [2024-05-15 00:59:42.093816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.267 [2024-05-15 00:59:42.093846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:55.267 
[2024-05-15 00:59:42.105408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e84c0 00:22:55.267 [2024-05-15 00:59:42.106345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.267 [2024-05-15 00:59:42.106378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:55.267 [2024-05-15 00:59:42.120710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e88f8 00:22:55.267 [2024-05-15 00:59:42.121830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.267 [2024-05-15 00:59:42.121861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:55.267 [2024-05-15 00:59:42.134871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f0bc0 00:22:55.267 [2024-05-15 00:59:42.136210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.267 [2024-05-15 00:59:42.136247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:55.267 [2024-05-15 00:59:42.150455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f4b08 00:22:55.267 [2024-05-15 00:59:42.152520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.267 [2024-05-15 00:59:42.152552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:55.267 [2024-05-15 00:59:42.164861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e5220 00:22:55.267 [2024-05-15 00:59:42.167185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.267 [2024-05-15 00:59:42.167216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:55.267 [2024-05-15 00:59:42.174725] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190eea00 00:22:55.267 [2024-05-15 00:59:42.175668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.267 [2024-05-15 00:59:42.175698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:55.267 [2024-05-15 00:59:42.187738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f92c0 00:22:55.267 [2024-05-15 00:59:42.188647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.267 [2024-05-15 00:59:42.188678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0004 p:0 
m:0 dnr:0 00:22:55.267 [2024-05-15 00:59:42.202153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e5ec8 00:22:55.267 [2024-05-15 00:59:42.203245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.267 [2024-05-15 00:59:42.203275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:55.267 [2024-05-15 00:59:42.216608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f6890 00:22:55.267 [2024-05-15 00:59:42.217924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.267 [2024-05-15 00:59:42.217962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:55.267 [2024-05-15 00:59:42.230992] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190fbcf0 00:22:55.267 [2024-05-15 00:59:42.232473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.268 [2024-05-15 00:59:42.232503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:55.268 [2024-05-15 00:59:42.243851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e4de8 00:22:55.268 [2024-05-15 00:59:42.244772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.268 [2024-05-15 00:59:42.244809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:55.268 [2024-05-15 00:59:42.257693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f2d80 00:22:55.268 [2024-05-15 00:59:42.258407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.268 [2024-05-15 00:59:42.258437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:55.268 [2024-05-15 00:59:42.273365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e27f0 00:22:55.268 [2024-05-15 00:59:42.275236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.268 [2024-05-15 00:59:42.275266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:55.268 [2024-05-15 00:59:42.287727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190ef6a8 00:22:55.268 [2024-05-15 00:59:42.289815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.268 [2024-05-15 00:59:42.289845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 
cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:55.268 [2024-05-15 00:59:42.302103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f6458 00:22:55.268 [2024-05-15 00:59:42.304353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.268 [2024-05-15 00:59:42.304383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:55.268 [2024-05-15 00:59:42.311868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f7100 00:22:55.268 [2024-05-15 00:59:42.312777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.268 [2024-05-15 00:59:42.312806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:55.526 [2024-05-15 00:59:42.325983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190fda78 00:22:55.526 [2024-05-15 00:59:42.326899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.526 [2024-05-15 00:59:42.326941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:55.526 [2024-05-15 00:59:42.340253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e4578 00:22:55.526 [2024-05-15 00:59:42.341360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.526 [2024-05-15 00:59:42.341390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:55.526 [2024-05-15 00:59:42.353254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e7c50 00:22:55.526 [2024-05-15 00:59:42.354354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.526 [2024-05-15 00:59:42.354383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:55.526 [2024-05-15 00:59:42.367686] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190ed0b0 00:22:55.526 [2024-05-15 00:59:42.368956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.526 [2024-05-15 00:59:42.368985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:55.526 [2024-05-15 00:59:42.382043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e3060 00:22:55.526 [2024-05-15 00:59:42.383500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.526 [2024-05-15 00:59:42.383529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:55.526 [2024-05-15 00:59:42.396407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f4298 00:22:55.526 [2024-05-15 00:59:42.398093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.526 [2024-05-15 00:59:42.398124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:55.526 [2024-05-15 00:59:42.410719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e7c50 00:22:55.526 [2024-05-15 00:59:42.412597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.526 [2024-05-15 00:59:42.412627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:55.526 [2024-05-15 00:59:42.425062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e5ec8 00:22:55.526 [2024-05-15 00:59:42.427114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.526 [2024-05-15 00:59:42.427144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:55.526 [2024-05-15 00:59:42.438067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190e5220 00:22:55.526 [2024-05-15 00:59:42.439553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.526 [2024-05-15 00:59:42.439582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:55.526 [2024-05-15 00:59:42.451535] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190f4f40 00:22:55.526 [2024-05-15 00:59:42.453014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.526 [2024-05-15 00:59:42.453048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:55.526 [2024-05-15 00:59:42.465738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190df988 00:22:55.526 [2024-05-15 00:59:42.467455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.526 [2024-05-15 00:59:42.467484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:55.527 [2024-05-15 00:59:42.477242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efcfa0) with pdu=0x2000190ddc00 00:22:55.527 [2024-05-15 00:59:42.478114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.527 [2024-05-15 00:59:42.478143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
00:22:55.527
00:22:55.527 Latency(us)
00:22:55.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:55.527 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:22:55.527 nvme0n1 : 2.01 18561.00 72.50 0.00 0.00 6882.80 2924.85 16893.72
00:22:55.527 ===================================================================================================================
00:22:55.527 Total : 18561.00 72.50 0.00 0.00 6882.80 2924.85 16893.72
00:22:55.527 0
00:22:55.527 00:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:55.527 | .driver_specific
00:22:55.527 | .nvme_error
00:22:55.527 | .status_code
00:22:55.527 | .command_transient_transport_error'
00:22:55.785 00:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 ))
00:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4081991
00:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 4081991 ']'
00:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 4081991
00:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4081991
00:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4081991'
killing process with pid 4081991
00:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 4081991
Received shutdown signal, test time was about 2.000000 seconds
00:22:55.785
00:22:55.785 Latency(us)
00:22:55.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:55.785 ===================================================================================================================
00:22:55.785 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:55.785 00:59:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 4081991
00:22:56.043 00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
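The (( 146 > 0 )) check above is the pass/fail criterion for the run that just finished: the harness reads the per-status-code NVMe error counters out of bdevperf and asserts that the injected CRC errors surfaced as TRANSIENT TRANSPORT ERROR completions. A minimal sketch of that check, assuming the same /var/tmp/bperf.sock RPC socket as the trace (the rpc.py invocation and the jq path are verbatim from the trace; the shell wrapper is a paraphrase of digest.sh, not its exact source):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    get_transient_errcount() {
        # bdev_nvme_set_options --nvme-error-stat (traced below for the next
        # run) makes the NVMe bdev module count completions per status code;
        # bdev_get_iostat then exposes those counters per bdev.
        "$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # Fail the test unless the injected digest errors were reported back to
    # the initiator as transient transport errors (146 of them in this run).
    (( $(get_transient_errcount nvme0n1) > 0 ))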
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4082311
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4082311 /var/tmp/bperf.sock
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 4082311 ']'
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:56.044 [2024-05-15 00:59:43.065670] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization...
00:22:56.044 [2024-05-15 00:59:43.065759] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082311 ]
00:22:56.044 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:56.044 Zero copy mechanism will not be used.
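The second bperf instance is brought up the same way as the first: bdevperf is launched with -z so it comes up idle on core 1, opens the /var/tmp/bperf.sock RPC socket, and waits for a perform_tests RPC instead of running I/O immediately. A sketch of that launch, assuming the workspace path from the trace (the bdevperf arguments are verbatim; the polling loop merely stands in for the harness's waitforlisten helper):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # -m 2: core mask 0x2, i.e. core 1 only; -w randwrite -o 131072 -q 16 -t 2
    # is the workload requested by run_bperf_err randwrite 131072 16; -z makes
    # bdevperf wait for an RPC before starting I/O.
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Wait (up to ~10 s) for the RPC socket to appear before issuing RPCs.
    for ((i = 0; i < 100; i++)); do
        [[ -S /var/tmp/bperf.sock ]] && break
        sleep 0.1
    done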
00:22:56.044 EAL: No free 2048 kB hugepages reported on node 1
00:22:56.302 [2024-05-15 00:59:43.125101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:56.302 [2024-05-15 00:59:43.241615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:56.302 00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:56.566 00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:59:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:57.190 nvme0n1
00:22:57.190 00:59:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:59:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:59:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:59:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:59:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:59:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:57.190 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:57.190 Zero copy mechanism will not be used.
00:22:57.190 Running I/O for 2 seconds...
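The 2-second run that follows exercises the controller set up just above. The same sequence, collected from the trace into one place (every RPC below appears verbatim in the log; only the rpc_py/bperf_py/sock variables are added here for readability):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
    sock=/var/tmp/bperf.sock

    # Keep per-status-code NVMe error counters and retry failed I/O forever,
    # so digest errors show up in the stats without failing bdevperf itself.
    "$rpc_py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach with CRC32C error injection disabled, so the fabric connect
    # itself succeeds, and with the NVMe/TCP data digest (--ddgst) enabled.
    "$rpc_py" -s "$sock" accel_error_inject_error -o crc32c -t disable
    "$rpc_py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Now corrupt CRC32C results (-t corrupt -i 32, verbatim from the trace)
    # so computed data digests no longer match the payload, then start the
    # workload; each corrupted digest is logged by data_crc32_calc_done below.
    "$rpc_py" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32
    "$bperf_py" -s "$sock" perform_tests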
00:22:57.190 [2024-05-15 00:59:44.192912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.190 [2024-05-15 00:59:44.193368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.190 [2024-05-15 00:59:44.193409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.190 [2024-05-15 00:59:44.206704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.190 [2024-05-15 00:59:44.207145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.190 [2024-05-15 00:59:44.207180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.190 [2024-05-15 00:59:44.218913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.190 [2024-05-15 00:59:44.219359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.190 [2024-05-15 00:59:44.219393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.190 [2024-05-15 00:59:44.231882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.190 [2024-05-15 00:59:44.232318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.190 [2024-05-15 00:59:44.232352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.244654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.245104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.245138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.257323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.257584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.257618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.269980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.270403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.270436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.282685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.283118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.283150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.295535] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.295966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.296007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.308068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.308474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.308507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.321399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.321819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.321851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.334221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.334636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.334668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.346640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.347064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.347099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.359526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.359928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.359967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.372333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.372755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.372787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.384813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.385097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.385136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.398150] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.398540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.398573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.410635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.411048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.411081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.424360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.424592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.424623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.437895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.438320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.438352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.452232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.452639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.452672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.466878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.467304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.467338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.480895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.481342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.481375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.495142] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.495549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.495581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.508620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.509025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.509058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.472 [2024-05-15 00:59:44.520974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.472 [2024-05-15 00:59:44.521397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.472 [2024-05-15 00:59:44.521429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.732 [2024-05-15 00:59:44.532214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.732 [2024-05-15 00:59:44.532639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.732 [2024-05-15 00:59:44.532671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.732 [2024-05-15 00:59:44.544874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.732 [2024-05-15 00:59:44.545302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.732 
[2024-05-15 00:59:44.545334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.732 [2024-05-15 00:59:44.557001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.732 [2024-05-15 00:59:44.557395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.732 [2024-05-15 00:59:44.557426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.732 [2024-05-15 00:59:44.569671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.732 [2024-05-15 00:59:44.569843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.732 [2024-05-15 00:59:44.569874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.732 [2024-05-15 00:59:44.581579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.732 [2024-05-15 00:59:44.581846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.733 [2024-05-15 00:59:44.581877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.733 [2024-05-15 00:59:44.593760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.733 [2024-05-15 00:59:44.593988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.733 [2024-05-15 00:59:44.594020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.733 [2024-05-15 00:59:44.606732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.733 [2024-05-15 00:59:44.607170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.733 [2024-05-15 00:59:44.607210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.733 [2024-05-15 00:59:44.620543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.733 [2024-05-15 00:59:44.620969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.733 [2024-05-15 00:59:44.621001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.733 [2024-05-15 00:59:44.633956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.733 [2024-05-15 00:59:44.634375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.733 [2024-05-15 00:59:44.634409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.733 [2024-05-15 00:59:44.647143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.733 [2024-05-15 00:59:44.647573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.733 [2024-05-15 00:59:44.647605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.733 [2024-05-15 00:59:44.659451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.733 [2024-05-15 00:59:44.659769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.733 [2024-05-15 00:59:44.659800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.733 [2024-05-15 00:59:44.672908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.733 [2024-05-15 00:59:44.673340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.733 [2024-05-15 00:59:44.673373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.733 [2024-05-15 00:59:44.686293] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.733 [2024-05-15 00:59:44.686697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.733 [2024-05-15 00:59:44.686731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.733 [2024-05-15 00:59:44.700225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.733 [2024-05-15 00:59:44.700643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.733 [2024-05-15 00:59:44.700674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.733 [2024-05-15 00:59:44.714006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.733 [2024-05-15 00:59:44.714426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.733 [2024-05-15 00:59:44.714458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.733 [2024-05-15 00:59:44.728500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.733 [2024-05-15 00:59:44.728917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.733 [2024-05-15 00:59:44.728958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.733 [2024-05-15 00:59:44.743040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.733 [2024-05-15 00:59:44.743455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.733 [2024-05-15 00:59:44.743488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.733 [2024-05-15 00:59:44.756920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.733 [2024-05-15 00:59:44.757349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.733 [2024-05-15 00:59:44.757381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.733 [2024-05-15 00:59:44.769175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.733 [2024-05-15 00:59:44.769569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.733 [2024-05-15 00:59:44.769600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.733 [2024-05-15 00:59:44.783292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.733 [2024-05-15 00:59:44.783695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.733 [2024-05-15 00:59:44.783726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.992 [2024-05-15 00:59:44.796264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.992 [2024-05-15 00:59:44.796686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.992 [2024-05-15 00:59:44.796718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.992 [2024-05-15 00:59:44.809838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.992 [2024-05-15 00:59:44.810260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.992 [2024-05-15 00:59:44.810293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.992 [2024-05-15 00:59:44.823653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.992 [2024-05-15 00:59:44.824067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.992 [2024-05-15 00:59:44.824100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.992 [2024-05-15 00:59:44.836442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.992 [2024-05-15 00:59:44.836832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.992 [2024-05-15 00:59:44.836865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.992 [2024-05-15 00:59:44.849704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.992 [2024-05-15 00:59:44.850105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.992 [2024-05-15 00:59:44.850137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.992 [2024-05-15 00:59:44.863551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.992 [2024-05-15 00:59:44.863977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.992 [2024-05-15 00:59:44.864009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.992 [2024-05-15 00:59:44.876891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.992 [2024-05-15 00:59:44.877321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.992 [2024-05-15 00:59:44.877353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.992 [2024-05-15 00:59:44.889244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.992 [2024-05-15 00:59:44.889667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.992 [2024-05-15 00:59:44.889699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.992 [2024-05-15 00:59:44.903615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.992 [2024-05-15 00:59:44.904026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.992 [2024-05-15 00:59:44.904065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.992 [2024-05-15 00:59:44.916866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.992 
[2024-05-15 00:59:44.917294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.993 [2024-05-15 00:59:44.917326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.993 [2024-05-15 00:59:44.930269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.993 [2024-05-15 00:59:44.930505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.993 [2024-05-15 00:59:44.930537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.993 [2024-05-15 00:59:44.943493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.993 [2024-05-15 00:59:44.944072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.993 [2024-05-15 00:59:44.944104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.993 [2024-05-15 00:59:44.955640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.993 [2024-05-15 00:59:44.956143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.993 [2024-05-15 00:59:44.956182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.993 [2024-05-15 00:59:44.968971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.993 [2024-05-15 00:59:44.969429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.993 [2024-05-15 00:59:44.969461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.993 [2024-05-15 00:59:44.981760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.993 [2024-05-15 00:59:44.982194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.993 [2024-05-15 00:59:44.982226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.993 [2024-05-15 00:59:44.994120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.993 [2024-05-15 00:59:44.994532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.993 [2024-05-15 00:59:44.994563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.993 [2024-05-15 00:59:45.006036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.993 [2024-05-15 00:59:45.006499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.993 [2024-05-15 00:59:45.006530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.993 [2024-05-15 00:59:45.019059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.993 [2024-05-15 00:59:45.019576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.993 [2024-05-15 00:59:45.019616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.993 [2024-05-15 00:59:45.032625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.993 [2024-05-15 00:59:45.033145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.993 [2024-05-15 00:59:45.033178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.993 [2024-05-15 00:59:45.046121] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:57.993 [2024-05-15 00:59:45.046474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.993 [2024-05-15 00:59:45.046505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.250 [2024-05-15 00:59:45.059062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.250 [2024-05-15 00:59:45.059442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.250 [2024-05-15 00:59:45.059475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.250 [2024-05-15 00:59:45.071773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.250 [2024-05-15 00:59:45.072236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.250 [2024-05-15 00:59:45.072268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.250 [2024-05-15 00:59:45.084724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.250 [2024-05-15 00:59:45.085216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.250 [2024-05-15 00:59:45.085248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.250 [2024-05-15 00:59:45.097706] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.250 [2024-05-15 00:59:45.098149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.250 [2024-05-15 00:59:45.098182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.250 [2024-05-15 00:59:45.110577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.250 [2024-05-15 00:59:45.110971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.250 [2024-05-15 00:59:45.111004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.250 [2024-05-15 00:59:45.123210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.250 [2024-05-15 00:59:45.123690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.250 [2024-05-15 00:59:45.123722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.250 [2024-05-15 00:59:45.136290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.250 [2024-05-15 00:59:45.136732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.250 [2024-05-15 00:59:45.136764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.250 [2024-05-15 00:59:45.148984] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.251 [2024-05-15 00:59:45.149318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.251 [2024-05-15 00:59:45.149350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.251 [2024-05-15 00:59:45.161384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.251 [2024-05-15 00:59:45.161815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.251 [2024-05-15 00:59:45.161847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.251 [2024-05-15 00:59:45.173570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.251 [2024-05-15 00:59:45.173951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.251 [2024-05-15 00:59:45.173985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
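Reading the flood of records in this run: each injected corruption surfaces twice. The ERROR line is the TCP transport's digest check (tcp.c:2058, data_crc32_calc_done) failing because the re-computed CRC32C over the payload no longer matches the digest carried in the PDU -- with the corruption armed on the crc32c opcode, the check fails even though the wire data is fine -- and the paired NOTICE lines print the affected WRITE and its completion, TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0h / status code 22h. Because --bdev-retry-count -1 was set, the initiator retries instead of failing I/O up the stack, which is why the stream continues for the full 2-second run. A quick way to triage a captured copy of this output (the log file name is hypothetical):

    LOG=bperf_digest_error.log   # hypothetical capture of the stream above
    # Digest failures and transient-transport completions should track each
    # other closely; a large gap points at a different failure mode.
    echo "digest errors:         $(grep -c 'data_crc32_calc_done.*Data digest error' "$LOG")"
    echo "transient completions: $(grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' "$LOG")"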
00:22:58.251 [2024-05-15 00:59:45.184168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.251 [2024-05-15 00:59:45.184705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.251 [2024-05-15 00:59:45.184736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.251 [2024-05-15 00:59:45.197391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.251 [2024-05-15 00:59:45.197958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.251 [2024-05-15 00:59:45.197989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.251 [2024-05-15 00:59:45.208830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.251 [2024-05-15 00:59:45.209213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.251 [2024-05-15 00:59:45.209246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.251 [2024-05-15 00:59:45.221655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.251 [2024-05-15 00:59:45.222096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.251 [2024-05-15 00:59:45.222128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.251 [2024-05-15 00:59:45.234021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.251 [2024-05-15 00:59:45.234452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.251 [2024-05-15 00:59:45.234483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.251 [2024-05-15 00:59:45.247019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.251 [2024-05-15 00:59:45.247482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.251 [2024-05-15 00:59:45.247514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.251 [2024-05-15 00:59:45.259477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.251 [2024-05-15 00:59:45.259913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.251 [2024-05-15 00:59:45.259954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.251 [2024-05-15 00:59:45.272482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.251 [2024-05-15 00:59:45.272953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.251 [2024-05-15 00:59:45.272984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.251 [2024-05-15 00:59:45.285663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.251 [2024-05-15 00:59:45.285998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.251 [2024-05-15 00:59:45.286037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.251 [2024-05-15 00:59:45.298248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.251 [2024-05-15 00:59:45.298574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.251 [2024-05-15 00:59:45.298605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.310335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.310875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.310908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.323540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.323903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.323942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.335864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.336334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.336366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.348400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.348784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.348817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.359735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.360113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.360145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.371810] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.372265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.372297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.384760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.385143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.385176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.397031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.397516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.397547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.409291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.409763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.409796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.421688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.422128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.422161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.433999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.434423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.434455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.446330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.446762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.446793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.458445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.458836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.458867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.471744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.472265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.472298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.485025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.485450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.485481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.498505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.499039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.499079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.511118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.511567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.511599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.523122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.523635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 
[2024-05-15 00:59:45.523667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.534699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.535220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.535253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.546108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.546600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.546634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.509 [2024-05-15 00:59:45.559844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.509 [2024-05-15 00:59:45.560267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.509 [2024-05-15 00:59:45.560301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.768 [2024-05-15 00:59:45.572951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.768 [2024-05-15 00:59:45.573421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.768 [2024-05-15 00:59:45.573456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.768 [2024-05-15 00:59:45.585200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.769 [2024-05-15 00:59:45.585745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.769 [2024-05-15 00:59:45.585788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.769 [2024-05-15 00:59:45.596993] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.769 [2024-05-15 00:59:45.597573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.769 [2024-05-15 00:59:45.597606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.769 [2024-05-15 00:59:45.610149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:58.769 [2024-05-15 00:59:45.610627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.769 [2024-05-15 00:59:45.610661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... roughly 46 further entries of the same pattern, timestamps 00:59:45.622 through 00:59:46.172, condensed: each consists of a tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90, followed by a nvme_qpair.c: 243 WRITE sqid:1 cid:15 nsid:1 len:32 command print (lba varies per entry) and a nvme_qpair.c: 474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:22:59.289 [2024-05-15 00:59:46.183512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1efd480) with pdu=0x2000190fef90 00:22:59.289 [2024-05-15 00:59:46.183904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.289 [2024-05-15 00:59:46.183956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.289
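The errors above are the digest test behaving as intended: bdevperf runs with data digest checking enabled under conditions that make the CRC32C check in tcp.c:data_crc32_calc_done fail, so each WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). For reference, the NVMe/TCP data digest is a CRC32C (Castagnoli) checksum carried after the DATA PDU payload; a minimal Python sketch of the receiver-side check, assuming the reflected polynomial 0x82F63B78 and a little-endian on-wire digest (an illustration, not SPDK's actual code):

    import struct

    def crc32c(data: bytes) -> int:
        # Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78,
        # init and final XOR of 0xFFFFFFFF.
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    def data_digest_ok(payload: bytes, wire_digest: bytes) -> bool:
        # The 4-byte digest trails the PDU payload on the wire.
        (expected,) = struct.unpack('<I', wire_digest)
        return crc32c(payload) == expected

    # A single flipped bit fails the check, which is the condition the
    # nvmf_digest_error test provokes on every WRITE logged above.
    good = struct.pack('<I', crc32c(b'example payload'))
    assert data_digest_ok(b'example payload', good)
    assert not data_digest_ok(b'example payloaD', good)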
00:22:59.289 Latency(us) 00:22:59.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.289 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:59.289 nvme0n1 : 2.01 2460.80 307.60 0.00 0.00 6484.23 4199.16 15534.46 00:22:59.289 =================================================================================================================== 00:22:59.289 Total : 2460.80 307.60 0.00 0.00 6484.23 4199.16 15534.46 00:22:59.289 0 00:22:59.289 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:59.289 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:59.289 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:59.289 | .driver_specific 00:22:59.289 | .nvme_error 00:22:59.289 | .status_code 00:22:59.289 | .command_transient_transport_error' 00:22:59.289 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:59.548 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 )) 00:22:59.548 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4082311 00:22:59.548 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 4082311 ']' 00:22:59.548 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 4082311 00:22:59.548 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:22:59.548 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:59.548 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4082311 00:22:59.548 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:59.548 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:59.548 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4082311' 00:22:59.548 killing process with pid 4082311 00:22:59.548 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 4082311 00:22:59.548 Received shutdown signal, test time was about 2.000000 seconds 00:22:59.548 00:22:59.548 Latency(us) 00:22:59.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.548 =================================================================================================================== 00:22:59.548 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:59.548 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 4082311 00:22:59.806 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 4081195 00:22:59.806 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 4081195 ']' 00:22:59.806 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 4081195 00:22:59.806 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:22:59.806 00:59:46
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:59.806 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4081195 00:22:59.806 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:59.806 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:59.806 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4081195' 00:22:59.806 killing process with pid 4081195 00:22:59.806 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 4081195 00:22:59.806 [2024-05-15 00:59:46.771552] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:59.806 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 4081195 00:23:00.065 00:23:00.065 real 0m15.652s 00:23:00.065 user 0m31.792s 00:23:00.065 sys 0m3.722s 00:23:00.065 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:00.065 00:59:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:00.065 ************************************ 00:23:00.065 END TEST nvmf_digest_error 00:23:00.065 ************************************ 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:00.065 rmmod nvme_tcp 00:23:00.065 rmmod nvme_fabrics 00:23:00.065 rmmod nvme_keyring 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 4081195 ']' 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 4081195 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 4081195 ']' 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 4081195 00:23:00.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (4081195) - No such process 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 4081195 is not found' 00:23:00.065 Process with pid 4081195 is not found 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.065 00:59:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.603 00:59:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:02.603 00:23:02.603 real 0m35.523s 00:23:02.603 user 1m4.698s 00:23:02.603 sys 0m8.967s 00:23:02.603 00:59:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:02.603 00:59:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:02.603 ************************************ 00:23:02.603 END TEST nvmf_digest 00:23:02.603 ************************************ 00:23:02.603 00:59:49 nvmf_tcp -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:23:02.603 00:59:49 nvmf_tcp -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:23:02.603 00:59:49 nvmf_tcp -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:23:02.603 00:59:49 nvmf_tcp -- nvmf/nvmf.sh@120 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:23:02.603 00:59:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:02.603 00:59:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:02.603 00:59:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:02.603 ************************************ 00:23:02.603 START TEST nvmf_bdevperf 00:23:02.603 ************************************ 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:23:02.603 * Looking for test storage... 
00:23:02.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:02.603 00:59:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:23:03.989 Found 0000:08:00.0 (0x8086 - 0x159b) 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:23:03.989 Found 0000:08:00.1 (0x8086 - 0x159b) 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.989 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:23:03.990 Found net devices under 0000:08:00.0: cvl_0_0 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:23:03.990 Found net devices under 0000:08:00.1: cvl_0_1 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:03.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:03.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:23:03.990 00:23:03.990 --- 10.0.0.2 ping statistics --- 00:23:03.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.990 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:03.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:03.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:23:03.990 00:23:03.990 --- 10.0.0.1 ping statistics --- 00:23:03.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.990 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=4084138 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 4084138 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 4084138 ']' 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:03.990 00:59:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:03.990 [2024-05-15 00:59:50.993430] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:23:03.990 [2024-05-15 00:59:50.993526] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.990 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.250 [2024-05-15 00:59:51.057438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:04.250 [2024-05-15 00:59:51.174513] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
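waitforlisten above is just a poll loop: it blocks until the freshly launched nvmf_tgt (pid 4084138) accepts connections on the UNIX-domain RPC socket named in the log. A rough Python equivalent, assuming the same /var/tmp/spdk.sock path (the timeout value is illustrative):

    import socket
    import time

    def wait_for_rpc_socket(path: str = '/var/tmp/spdk.sock',
                            timeout: float = 30.0) -> None:
        # Retry until the target's RPC socket accepts a connection.
        deadline = time.monotonic() + timeout
        while True:
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                sock.connect(path)
                return  # listening; RPC configuration can proceed
            except OSError:
                if time.monotonic() >= deadline:
                    raise TimeoutError(f'{path} not up within {timeout}s')
                time.sleep(0.2)
            finally:
                sock.close()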
00:23:04.250 [2024-05-15 00:59:51.174570] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.250 [2024-05-15 00:59:51.174585] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.250 [2024-05-15 00:59:51.174598] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.250 [2024-05-15 00:59:51.174610] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.250 [2024-05-15 00:59:51.174697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.250 [2024-05-15 00:59:51.174987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:04.250 [2024-05-15 00:59:51.175020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.250 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:04.250 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:23:04.250 00:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:04.250 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:04.250 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:04.250 00:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.250 00:59:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:04.250 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.250 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:04.509 [2024-05-15 00:59:51.311725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:04.509 Malloc0 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
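The rpc_cmd calls above assemble the whole target in five steps: a TCP transport with an 8192-byte IO unit, a 64 MiB / 512-byte-block malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDK00000000000001, its namespace, and (confirmed just below) a listener on 10.0.0.2:4420. A sketch of the same sequence as raw JSON-RPC over /var/tmp/spdk.sock; the parameter names are assumed from SPDK's rpc.py of this era rather than taken from the log:

    import json
    import socket

    def rpc(method: str, params: dict,
            path: str = '/var/tmp/spdk.sock') -> dict:
        # One-shot JSON-RPC 2.0 exchange over the target's UNIX socket.
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(path)
        sock.sendall(json.dumps({'jsonrpc': '2.0', 'id': 1,
                                 'method': method, 'params': params}).encode())
        buf = b''
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError('RPC socket closed mid-response')
            buf += chunk
            try:
                reply = json.loads(buf)
                break
            except json.JSONDecodeError:
                continue  # response not fully received yet
        sock.close()
        if 'error' in reply:
            raise RuntimeError(reply['error'])
        return reply['result']

    nqn = 'nqn.2016-06.io.spdk:cnode1'
    rpc('nvmf_create_transport', {'trtype': 'tcp', 'io_unit_size': 8192})
    rpc('bdev_malloc_create', {'name': 'Malloc0', 'block_size': 512,
                               'num_blocks': 131072})  # 64 MiB / 512 B
    rpc('nvmf_create_subsystem', {'nqn': nqn, 'allow_any_host': True,
                                  'serial_number': 'SPDK00000000000001'})
    rpc('nvmf_subsystem_add_ns', {'nqn': nqn,
                                  'namespace': {'bdev_name': 'Malloc0'}})
    rpc('nvmf_subsystem_add_listener', {'nqn': nqn,
                                        'listen_address': {'trtype': 'tcp',
                                                           'adrfam': 'ipv4',
                                                           'traddr': '10.0.0.2',
                                                           'trsvcid': '4420'}})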
00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:04.509 [2024-05-15 00:59:51.374012] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:04.509 [2024-05-15 00:59:51.374265] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.509 { 00:23:04.509 "params": { 00:23:04.509 "name": "Nvme$subsystem", 00:23:04.509 "trtype": "$TEST_TRANSPORT", 00:23:04.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.509 "adrfam": "ipv4", 00:23:04.509 "trsvcid": "$NVMF_PORT", 00:23:04.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.509 "hdgst": ${hdgst:-false}, 00:23:04.509 "ddgst": ${ddgst:-false} 00:23:04.509 }, 00:23:04.509 "method": "bdev_nvme_attach_controller" 00:23:04.509 } 00:23:04.509 EOF 00:23:04.509 )") 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:23:04.509 00:59:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:04.509 "params": { 00:23:04.509 "name": "Nvme1", 00:23:04.509 "trtype": "tcp", 00:23:04.509 "traddr": "10.0.0.2", 00:23:04.509 "adrfam": "ipv4", 00:23:04.509 "trsvcid": "4420", 00:23:04.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:04.509 "hdgst": false, 00:23:04.509 "ddgst": false 00:23:04.509 }, 00:23:04.509 "method": "bdev_nvme_attach_controller" 00:23:04.509 }' 00:23:04.509 [2024-05-15 00:59:51.423506] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:23:04.509 [2024-05-15 00:59:51.423597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4084250 ] 00:23:04.509 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.509 [2024-05-15 00:59:51.484430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.768 [2024-05-15 00:59:51.604457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.768 Running I/O for 1 seconds... 
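The JSON that gen_nvmf_target_json fed to bdevperf through --json /dev/fd/62 is printed above: one bdev_nvme_attach_controller entry pointing at 10.0.0.2:4420 with header and data digests off. A sketch of generating the same config in Python; the log only shows the inner entry, so the outer subsystems/config envelope here is assumed from SPDK's JSON-config format:

    import json

    def bperf_config(traddr: str = '10.0.0.2', trsvcid: str = '4420') -> str:
        # Mirrors the printed entry: attach one NVMe-oF TCP controller,
        # header digest (hdgst) and data digest (ddgst) disabled.
        entry = {
            'params': {
                'name': 'Nvme1',
                'trtype': 'tcp',
                'traddr': traddr,
                'adrfam': 'ipv4',
                'trsvcid': trsvcid,
                'subnqn': 'nqn.2016-06.io.spdk:cnode1',
                'hostnqn': 'nqn.2016-06.io.spdk:host1',
                'hdgst': False,
                'ddgst': False,
            },
            'method': 'bdev_nvme_attach_controller',
        }
        # Assumed envelope: {"subsystems": [{"subsystem": "bdev",
        # "config": [...]}]} is the shape SPDK apps load with --json.
        return json.dumps({'subsystems': [{'subsystem': 'bdev',
                                           'config': [entry]}]}, indent=2)

    print(bperf_config())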
00:23:06.143 
00:23:06.143 Latency(us)
00:23:06.143 Device Information                                      : runtime(s)     IOPS    MiB/s  Fail/s  TO/s   Average      min      max
00:23:06.143 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:06.143 Verification LBA range: start 0x0 length 0x4000
00:23:06.143 Nvme1n1                                                 :       1.01  7154.32    27.95    0.00  0.00  17806.42  3835.07 17670.45
00:23:06.143 ===================================================================================================================
00:23:06.143 Total                                                   :             7154.32    27.95    0.00  0.00  17806.42  3835.07 17670.45
00:23:06.143 00:59:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=4084361
00:23:06.143 00:59:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:23:06.143 00:59:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:23:06.143 00:59:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:23:06.143 00:59:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:23:06.143 00:59:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:23:06.143 00:59:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:23:06.143 00:59:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:23:06.143 {
00:23:06.143 "params": {
00:23:06.143 "name": "Nvme$subsystem",
00:23:06.143 "trtype": "$TEST_TRANSPORT",
00:23:06.143 "traddr": "$NVMF_FIRST_TARGET_IP",
00:23:06.143 "adrfam": "ipv4",
00:23:06.143 "trsvcid": "$NVMF_PORT",
00:23:06.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:23:06.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:23:06.143 "hdgst": ${hdgst:-false},
00:23:06.143 "ddgst": ${ddgst:-false}
00:23:06.143 },
00:23:06.143 "method": "bdev_nvme_attach_controller"
00:23:06.143 }
00:23:06.143 EOF
00:23:06.143 )")
00:23:06.143 00:59:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:23:06.143 00:59:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:23:06.143 00:59:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:23:06.143 00:59:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:23:06.143 "params": {
00:23:06.143 "name": "Nvme1",
00:23:06.143 "trtype": "tcp",
00:23:06.143 "traddr": "10.0.0.2",
00:23:06.143 "adrfam": "ipv4",
00:23:06.143 "trsvcid": "4420",
00:23:06.143 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:06.143 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:23:06.143 "hdgst": false,
00:23:06.143 "ddgst": false
00:23:06.143 },
00:23:06.143 "method": "bdev_nvme_attach_controller"
00:23:06.143 }'
00:23:06.143 [2024-05-15 00:59:53.053180] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization...
00:23:06.143 [2024-05-15 00:59:53.053279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4084361 ]
00:23:06.143 EAL: No free 2048 kB hugepages reported on node 1
00:23:06.143 [2024-05-15 00:59:53.114129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:06.401 [2024-05-15 00:59:53.233512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:06.401 Running I/O for 15 seconds...
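The 15-second bdevperf run just launched is the failover half of the test: the same attach config as before (now on /dev/fd/63), but with -t 15 and -f, and three seconds in the script hard-kills the process the host is connected to (the kill -9 4084138 that follows; judging by the refused reconnects afterwards, that pid is the nvmf target). The pattern, as a sketch ($rootdir and $nvmfpid are illustrative names, not necessarily the script's own):

    # Failover scenario driven below: kill the target under a running workload.
    "$rootdir/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3             # let the verify workload reach steady state
    kill -9 "$nvmfpid"  # hard-kill the NVMe-oF target mid-run
    sleep 3             # host-side bdev_nvme aborts in-flight I/O and retries resets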
00:23:09.691 00:59:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 4084138
00:23:09.691 00:59:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:23:09.691 [2024-05-15 00:59:56.020687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:09.691 [2024-05-15 00:59:56.020742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 126 similar command/completion pairs elided (00:59:56.020775 through 00:59:56.024855); with the pair above and the manually completed READ below, all 128 I/Os outstanding at queue depth 128 (READs lba 21128-21880, WRITEs lba 21888-22144, sqid:1) complete with ABORTED - SQ DELETION (00/08) after the target is killed ...]
00:23:09.694 [2024-05-15 00:59:56.024871] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe67a00 is same with the state(5) to be set
00:23:09.694 [2024-05-15 00:59:56.024896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:09.694 [2024-05-15 00:59:56.024909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:09.694 [2024-05-15 00:59:56.024922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21880 len:8 PRP1 0x0 PRP2 0x0
00:23:09.694 [2024-05-15 00:59:56.024944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:09.694 [2024-05-15 00:59:56.025026] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe67a00 was disconnected and freed. reset controller.
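Each elided pair is a command print followed by its completion, and in the "(00/08)" token the first number is the status code type, the second the status code: SCT 0x0 is the NVMe generic command status set, where SC 0x08 is Command Aborted due to SQ Deletion, the status every I/O still queued on qpair 0xe67a00 receives once the TCP connection to the killed target drops. Note dnr:0 on each completion: the do-not-retry bit is clear, so these I/Os may be requeued after a successful reset. A quick decode in shell (a sketch; only the "(00/08)" token itself is taken from the trace):

    # Split SPDK's "(SCT/SC)" status token and print the two fields.
    token="(00/08)"
    IFS=/ read -r sct sc <<< "${token//[()]/}"
    printf 'SCT=0x%x SC=0x%x\n' "0x$sct" "0x$sc"   # -> SCT=0x0 SC=0x8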
00:23:09.694 [2024-05-15 00:59:56.025106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.694 [2024-05-15 00:59:56.025128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.694 [2024-05-15 00:59:56.025144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.694 [2024-05-15 00:59:56.025159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.694 [2024-05-15 00:59:56.025174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.694 [2024-05-15 00:59:56.025188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.694 [2024-05-15 00:59:56.025203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.694 [2024-05-15 00:59:56.025217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.694 [2024-05-15 00:59:56.025231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.694 [2024-05-15 00:59:56.029375] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.694 [2024-05-15 00:59:56.029414] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.694 [2024-05-15 00:59:56.030204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.694 [2024-05-15 00:59:56.030485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.694 [2024-05-15 00:59:56.030527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.694 [2024-05-15 00:59:56.030547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.694 [2024-05-15 00:59:56.030827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.694 [2024-05-15 00:59:56.031113] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.694 [2024-05-15 00:59:56.031138] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.694 [2024-05-15 00:59:56.031157] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.694 [2024-05-15 00:59:56.035251] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
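errno = 111 in those connect() failures is ECONNREFUSED on Linux: after the kill -9 nothing listens on 10.0.0.2:4420 any more, so each reconnect is refused at the TCP layer before any NVMe/TCP handshake can start, and bdev_nvme fails the reset and schedules another attempt. The kernel headers spell the value out (path assumes a typical glibc install):

    # Look up errno 111 by value.
    grep -w 111 /usr/include/asm-generic/errno.h
    # -> #define ECONNREFUSED    111     /* Connection refused */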
[... 00:59:56.044118 through 00:59:56.151108: eight further reset cycles elided, identical to the attempt above except for timestamps; each prints nvme_ctrlr_disconnect "resetting controller", two posix_sock_create "connect() failed, errno = 111" errors, nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420", "controller reinitialization failed", "in failed state.", and _bdev_nvme_reset_ctrlr_complete "Resetting controller failed." ...]
00:23:09.695 [2024-05-15 00:59:56.159802] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.696 [2024-05-15 00:59:56.160360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.696 [2024-05-15 00:59:56.160647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.696 [2024-05-15 00:59:56.160699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.696 [2024-05-15 00:59:56.160721] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.696 [2024-05-15 00:59:56.161005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.696 [2024-05-15 00:59:56.161274] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.696 [2024-05-15 00:59:56.161296] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.696 [2024-05-15 00:59:56.161311] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.696 [2024-05-15 00:59:56.165466] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.696 [2024-05-15 00:59:56.174310] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.696 [2024-05-15 00:59:56.174924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.696 [2024-05-15 00:59:56.175275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.696 [2024-05-15 00:59:56.175301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.696 [2024-05-15 00:59:56.175318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.696 [2024-05-15 00:59:56.175592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.696 [2024-05-15 00:59:56.175861] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.696 [2024-05-15 00:59:56.175882] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.696 [2024-05-15 00:59:56.175897] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.696 [2024-05-15 00:59:56.180010] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.696 [2024-05-15 00:59:56.188745] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.696 [2024-05-15 00:59:56.189315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.696 [2024-05-15 00:59:56.189513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.696 [2024-05-15 00:59:56.189538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.697 [2024-05-15 00:59:56.189555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.697 [2024-05-15 00:59:56.189820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.697 [2024-05-15 00:59:56.190100] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.697 [2024-05-15 00:59:56.190122] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.697 [2024-05-15 00:59:56.190137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.697 [2024-05-15 00:59:56.194296] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.697 [2024-05-15 00:59:56.203309] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.697 [2024-05-15 00:59:56.203882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.204218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.204273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.697 [2024-05-15 00:59:56.204292] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.697 [2024-05-15 00:59:56.204572] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.697 [2024-05-15 00:59:56.204841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.697 [2024-05-15 00:59:56.204863] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.697 [2024-05-15 00:59:56.204878] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.697 [2024-05-15 00:59:56.209038] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.697 [2024-05-15 00:59:56.217755] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.697 [2024-05-15 00:59:56.218368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.218678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.218734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.697 [2024-05-15 00:59:56.218751] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.697 [2024-05-15 00:59:56.219045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.697 [2024-05-15 00:59:56.219315] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.697 [2024-05-15 00:59:56.219337] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.697 [2024-05-15 00:59:56.219352] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.697 [2024-05-15 00:59:56.223517] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.697 [2024-05-15 00:59:56.232276] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.697 [2024-05-15 00:59:56.232903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.233307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.233360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.697 [2024-05-15 00:59:56.233385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.697 [2024-05-15 00:59:56.233657] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.697 [2024-05-15 00:59:56.233926] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.697 [2024-05-15 00:59:56.233962] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.697 [2024-05-15 00:59:56.233977] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.697 [2024-05-15 00:59:56.238095] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.697 [2024-05-15 00:59:56.246850] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.697 [2024-05-15 00:59:56.247397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.247573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.247600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.697 [2024-05-15 00:59:56.247617] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.697 [2024-05-15 00:59:56.247884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.697 [2024-05-15 00:59:56.248170] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.697 [2024-05-15 00:59:56.248192] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.697 [2024-05-15 00:59:56.248207] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.697 [2024-05-15 00:59:56.252356] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.697 [2024-05-15 00:59:56.261401] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.697 [2024-05-15 00:59:56.261980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.262317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.262367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.697 [2024-05-15 00:59:56.262385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.697 [2024-05-15 00:59:56.262658] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.697 [2024-05-15 00:59:56.262927] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.697 [2024-05-15 00:59:56.262969] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.697 [2024-05-15 00:59:56.262984] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.697 [2024-05-15 00:59:56.267129] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.697 [2024-05-15 00:59:56.275865] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.697 [2024-05-15 00:59:56.276454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.276790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.276841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.697 [2024-05-15 00:59:56.276859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.697 [2024-05-15 00:59:56.277145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.697 [2024-05-15 00:59:56.277416] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.697 [2024-05-15 00:59:56.277438] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.697 [2024-05-15 00:59:56.277453] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.697 [2024-05-15 00:59:56.281545] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.697 [2024-05-15 00:59:56.290467] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.697 [2024-05-15 00:59:56.291003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.291334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.291379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.697 [2024-05-15 00:59:56.291397] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.697 [2024-05-15 00:59:56.291669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.697 [2024-05-15 00:59:56.291951] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.697 [2024-05-15 00:59:56.291980] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.697 [2024-05-15 00:59:56.291995] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.697 [2024-05-15 00:59:56.296093] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.697 [2024-05-15 00:59:56.305008] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.697 [2024-05-15 00:59:56.305533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.305698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.305729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.697 [2024-05-15 00:59:56.305747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.697 [2024-05-15 00:59:56.306030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.697 [2024-05-15 00:59:56.306307] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.697 [2024-05-15 00:59:56.306328] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.697 [2024-05-15 00:59:56.306343] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.697 [2024-05-15 00:59:56.310432] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.697 [2024-05-15 00:59:56.319625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.697 [2024-05-15 00:59:56.320249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.320525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.697 [2024-05-15 00:59:56.320574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.697 [2024-05-15 00:59:56.320592] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.697 [2024-05-15 00:59:56.320864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.697 [2024-05-15 00:59:56.321147] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.697 [2024-05-15 00:59:56.321169] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.697 [2024-05-15 00:59:56.321185] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.697 [2024-05-15 00:59:56.325303] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.698 [2024-05-15 00:59:56.334057] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.698 [2024-05-15 00:59:56.334642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.335096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.335136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.698 [2024-05-15 00:59:56.335155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.698 [2024-05-15 00:59:56.335427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.698 [2024-05-15 00:59:56.335697] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.698 [2024-05-15 00:59:56.335719] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.698 [2024-05-15 00:59:56.335750] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.698 [2024-05-15 00:59:56.339895] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.698 [2024-05-15 00:59:56.348641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.698 [2024-05-15 00:59:56.349250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.349569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.349620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.698 [2024-05-15 00:59:56.349638] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.698 [2024-05-15 00:59:56.349910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.698 [2024-05-15 00:59:56.350192] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.698 [2024-05-15 00:59:56.350215] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.698 [2024-05-15 00:59:56.350229] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.698 [2024-05-15 00:59:56.354353] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.698 [2024-05-15 00:59:56.363092] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.698 [2024-05-15 00:59:56.363720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.364094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.364148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.698 [2024-05-15 00:59:56.364166] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.698 [2024-05-15 00:59:56.364438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.698 [2024-05-15 00:59:56.364708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.698 [2024-05-15 00:59:56.364730] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.698 [2024-05-15 00:59:56.364745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.698 [2024-05-15 00:59:56.368880] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.698 [2024-05-15 00:59:56.377610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.698 [2024-05-15 00:59:56.378244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.378494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.378541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.698 [2024-05-15 00:59:56.378558] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.698 [2024-05-15 00:59:56.378831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.698 [2024-05-15 00:59:56.379111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.698 [2024-05-15 00:59:56.379135] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.698 [2024-05-15 00:59:56.379151] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.698 [2024-05-15 00:59:56.383246] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.698 [2024-05-15 00:59:56.392172] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.698 [2024-05-15 00:59:56.392716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.393107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.393148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.698 [2024-05-15 00:59:56.393167] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.698 [2024-05-15 00:59:56.393439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.698 [2024-05-15 00:59:56.393709] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.698 [2024-05-15 00:59:56.393731] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.698 [2024-05-15 00:59:56.393746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.698 [2024-05-15 00:59:56.397834] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.698 [2024-05-15 00:59:56.406732] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.698 [2024-05-15 00:59:56.407286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.407571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.407597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.698 [2024-05-15 00:59:56.407614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.698 [2024-05-15 00:59:56.407881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.698 [2024-05-15 00:59:56.408157] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.698 [2024-05-15 00:59:56.408180] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.698 [2024-05-15 00:59:56.408195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.698 [2024-05-15 00:59:56.412296] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.698 [2024-05-15 00:59:56.421246] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.698 [2024-05-15 00:59:56.421706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.421875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.421901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.698 [2024-05-15 00:59:56.421917] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.698 [2024-05-15 00:59:56.422192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.698 [2024-05-15 00:59:56.422462] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.698 [2024-05-15 00:59:56.422484] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.698 [2024-05-15 00:59:56.422498] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.698 [2024-05-15 00:59:56.426585] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.698 [2024-05-15 00:59:56.435737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.698 [2024-05-15 00:59:56.436331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.436657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.436706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.698 [2024-05-15 00:59:56.436724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.698 [2024-05-15 00:59:56.437013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.698 [2024-05-15 00:59:56.437284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.698 [2024-05-15 00:59:56.437306] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.698 [2024-05-15 00:59:56.437321] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.698 [2024-05-15 00:59:56.441425] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.698 [2024-05-15 00:59:56.450387] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.698 [2024-05-15 00:59:56.450958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.451179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.451207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.698 [2024-05-15 00:59:56.451225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.698 [2024-05-15 00:59:56.451497] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.698 [2024-05-15 00:59:56.451767] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.698 [2024-05-15 00:59:56.451789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.698 [2024-05-15 00:59:56.451804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.698 [2024-05-15 00:59:56.455958] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.698 [2024-05-15 00:59:56.464894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.698 [2024-05-15 00:59:56.465505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.465856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.698 [2024-05-15 00:59:56.465906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.698 [2024-05-15 00:59:56.465924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.698 [2024-05-15 00:59:56.466208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.698 [2024-05-15 00:59:56.466477] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.699 [2024-05-15 00:59:56.466499] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.699 [2024-05-15 00:59:56.466515] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.699 [2024-05-15 00:59:56.470634] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.699 [2024-05-15 00:59:56.479332] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.699 [2024-05-15 00:59:56.479894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.480189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.480243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.699 [2024-05-15 00:59:56.480265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.699 [2024-05-15 00:59:56.480540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.699 [2024-05-15 00:59:56.480810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.699 [2024-05-15 00:59:56.480832] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.699 [2024-05-15 00:59:56.480847] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.699 [2024-05-15 00:59:56.484955] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.699 [2024-05-15 00:59:56.493848] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.699 [2024-05-15 00:59:56.494491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.494769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.494820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.699 [2024-05-15 00:59:56.494838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.699 [2024-05-15 00:59:56.495123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.699 [2024-05-15 00:59:56.495395] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.699 [2024-05-15 00:59:56.495417] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.699 [2024-05-15 00:59:56.495431] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.699 [2024-05-15 00:59:56.499507] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.699 [2024-05-15 00:59:56.508430] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.699 [2024-05-15 00:59:56.508980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.509288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.509329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.699 [2024-05-15 00:59:56.509348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.699 [2024-05-15 00:59:56.509626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.699 [2024-05-15 00:59:56.509895] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.699 [2024-05-15 00:59:56.509916] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.699 [2024-05-15 00:59:56.509943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.699 [2024-05-15 00:59:56.514089] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.699 [2024-05-15 00:59:56.523078] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.699 [2024-05-15 00:59:56.523652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.523841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.523869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.699 [2024-05-15 00:59:56.523892] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.699 [2024-05-15 00:59:56.524179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.699 [2024-05-15 00:59:56.524450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.699 [2024-05-15 00:59:56.524472] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.699 [2024-05-15 00:59:56.524486] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.699 [2024-05-15 00:59:56.528613] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.699 [2024-05-15 00:59:56.537560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.699 [2024-05-15 00:59:56.538115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.538455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.538513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.699 [2024-05-15 00:59:56.538531] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.699 [2024-05-15 00:59:56.538810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.699 [2024-05-15 00:59:56.539093] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.699 [2024-05-15 00:59:56.539116] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.699 [2024-05-15 00:59:56.539130] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.699 [2024-05-15 00:59:56.543261] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.699 [2024-05-15 00:59:56.552038] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.699 [2024-05-15 00:59:56.552676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.552869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.552898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.699 [2024-05-15 00:59:56.552916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.699 [2024-05-15 00:59:56.553198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.699 [2024-05-15 00:59:56.553468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.699 [2024-05-15 00:59:56.553490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.699 [2024-05-15 00:59:56.553505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.699 [2024-05-15 00:59:56.557641] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.699 [2024-05-15 00:59:56.566638] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.699 [2024-05-15 00:59:56.567199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.567481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.567510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.699 [2024-05-15 00:59:56.567528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.699 [2024-05-15 00:59:56.567826] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.699 [2024-05-15 00:59:56.568114] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.699 [2024-05-15 00:59:56.568137] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.699 [2024-05-15 00:59:56.568152] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.699 [2024-05-15 00:59:56.572314] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.699 [2024-05-15 00:59:56.581138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.699 [2024-05-15 00:59:56.581764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.582100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.582155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.699 [2024-05-15 00:59:56.582174] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.699 [2024-05-15 00:59:56.582446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.699 [2024-05-15 00:59:56.582716] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.699 [2024-05-15 00:59:56.582738] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.699 [2024-05-15 00:59:56.582752] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.699 [2024-05-15 00:59:56.586904] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.699 [2024-05-15 00:59:56.595624] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.699 [2024-05-15 00:59:56.596278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.596524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.596574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.699 [2024-05-15 00:59:56.596592] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.699 [2024-05-15 00:59:56.596864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.699 [2024-05-15 00:59:56.597151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.699 [2024-05-15 00:59:56.597174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.699 [2024-05-15 00:59:56.597189] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.699 [2024-05-15 00:59:56.601292] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.699 [2024-05-15 00:59:56.610221] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.699 [2024-05-15 00:59:56.610709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.611046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.699 [2024-05-15 00:59:56.611087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.700 [2024-05-15 00:59:56.611106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.700 [2024-05-15 00:59:56.611385] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.700 [2024-05-15 00:59:56.611661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.700 [2024-05-15 00:59:56.611683] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.700 [2024-05-15 00:59:56.611698] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.700 [2024-05-15 00:59:56.615820] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.700 [2024-05-15 00:59:56.624754] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.700 [2024-05-15 00:59:56.625349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.700 [2024-05-15 00:59:56.625650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.700 [2024-05-15 00:59:56.625679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.700 [2024-05-15 00:59:56.625697] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.700 [2024-05-15 00:59:56.625982] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.700 [2024-05-15 00:59:56.626259] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.700 [2024-05-15 00:59:56.626281] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.700 [2024-05-15 00:59:56.626296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.700 [2024-05-15 00:59:56.630394] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.700 [2024-05-15 00:59:56.639341] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.700 [2024-05-15 00:59:56.639976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.700 [2024-05-15 00:59:56.640282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.700 [2024-05-15 00:59:56.640332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.700 [2024-05-15 00:59:56.640349] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.700 [2024-05-15 00:59:56.640622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.700 [2024-05-15 00:59:56.640892] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.700 [2024-05-15 00:59:56.640913] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.700 [2024-05-15 00:59:56.640928] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.700 [2024-05-15 00:59:56.645045] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.700 [2024-05-15 00:59:56.653732] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.700 [2024-05-15 00:59:56.654352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.700 [2024-05-15 00:59:56.654703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.700 [2024-05-15 00:59:56.654746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.700 [2024-05-15 00:59:56.654763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.700 [2024-05-15 00:59:56.655041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.700 [2024-05-15 00:59:56.655311] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.700 [2024-05-15 00:59:56.655340] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.700 [2024-05-15 00:59:56.655356] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.700 [2024-05-15 00:59:56.659456] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.700 [2024-05-15 00:59:56.668182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.700 [2024-05-15 00:59:56.668648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.700 [2024-05-15 00:59:56.669103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.700 [2024-05-15 00:59:56.669145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:09.700 [2024-05-15 00:59:56.669163] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:09.700 [2024-05-15 00:59:56.669436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:09.700 [2024-05-15 00:59:56.669710] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.700 [2024-05-15 00:59:56.669732] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.700 [2024-05-15 00:59:56.669747] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.700 [2024-05-15 00:59:56.673865] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.700 [2024-05-15 00:59:56.682564] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.700 [2024-05-15 00:59:56.683195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.700 [2024-05-15 00:59:56.683470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.700 [2024-05-15 00:59:56.683518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.700 [2024-05-15 00:59:56.683536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.700 [2024-05-15 00:59:56.683808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.700 [2024-05-15 00:59:56.684091] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.700 [2024-05-15 00:59:56.684115] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.700 [2024-05-15 00:59:56.684130] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.700 [2024-05-15 00:59:56.688239] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.700 [2024-05-15 00:59:56.697162] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.700 [2024-05-15 00:59:56.697760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.700 [2024-05-15 00:59:56.697940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.700 [2024-05-15 00:59:56.697971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.700 [2024-05-15 00:59:56.697990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.700 [2024-05-15 00:59:56.698262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.700 [2024-05-15 00:59:56.698533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.700 [2024-05-15 00:59:56.698555] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.700 [2024-05-15 00:59:56.698576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.700 [2024-05-15 00:59:56.702671] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.700 [2024-05-15 00:59:56.711597] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.700 [2024-05-15 00:59:56.712195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.700 [2024-05-15 00:59:56.712526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.700 [2024-05-15 00:59:56.712576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.700 [2024-05-15 00:59:56.712594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.700 [2024-05-15 00:59:56.712866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.700 [2024-05-15 00:59:56.713144] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.700 [2024-05-15 00:59:56.713166] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.700 [2024-05-15 00:59:56.713181] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.700 [2024-05-15 00:59:56.717258] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.700 [2024-05-15 00:59:56.726168] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.700 [2024-05-15 00:59:56.726724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.700 [2024-05-15 00:59:56.727110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.700 [2024-05-15 00:59:56.727164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.700 [2024-05-15 00:59:56.727183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.700 [2024-05-15 00:59:56.727455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.700 [2024-05-15 00:59:56.727724] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.700 [2024-05-15 00:59:56.727746] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.700 [2024-05-15 00:59:56.727760] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.700 [2024-05-15 00:59:56.731893] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.700 [2024-05-15 00:59:56.740642] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.700 [2024-05-15 00:59:56.741184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.700 [2024-05-15 00:59:56.741511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.700 [2024-05-15 00:59:56.741565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.700 [2024-05-15 00:59:56.741583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.700 [2024-05-15 00:59:56.741850] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.700 [2024-05-15 00:59:56.742130] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.700 [2024-05-15 00:59:56.742153] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.700 [2024-05-15 00:59:56.742167] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.960 [2024-05-15 00:59:56.746400] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.960 [2024-05-15 00:59:56.755121] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.960 [2024-05-15 00:59:56.755749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.960 [2024-05-15 00:59:56.756022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.960 [2024-05-15 00:59:56.756052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.960 [2024-05-15 00:59:56.756070] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.960 [2024-05-15 00:59:56.756343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.960 [2024-05-15 00:59:56.756619] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.960 [2024-05-15 00:59:56.756640] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.960 [2024-05-15 00:59:56.756655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.960 [2024-05-15 00:59:56.760793] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.960 [2024-05-15 00:59:56.769583] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.960 [2024-05-15 00:59:56.770191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.960 [2024-05-15 00:59:56.770461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.960 [2024-05-15 00:59:56.770541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.960 [2024-05-15 00:59:56.770559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.960 [2024-05-15 00:59:56.770832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.960 [2024-05-15 00:59:56.771114] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.960 [2024-05-15 00:59:56.771137] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.960 [2024-05-15 00:59:56.771152] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.960 [2024-05-15 00:59:56.775244] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.960 [2024-05-15 00:59:56.784158] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.960 [2024-05-15 00:59:56.784810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.960 [2024-05-15 00:59:56.784990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.960 [2024-05-15 00:59:56.785022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.960 [2024-05-15 00:59:56.785040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.960 [2024-05-15 00:59:56.785312] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.960 [2024-05-15 00:59:56.785582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.960 [2024-05-15 00:59:56.785604] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.960 [2024-05-15 00:59:56.785618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.960 [2024-05-15 00:59:56.789700] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.961 [2024-05-15 00:59:56.798635] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.961 [2024-05-15 00:59:56.799261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.799540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.799569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.961 [2024-05-15 00:59:56.799587] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.961 [2024-05-15 00:59:56.799865] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.961 [2024-05-15 00:59:56.800145] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.961 [2024-05-15 00:59:56.800167] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.961 [2024-05-15 00:59:56.800182] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.961 [2024-05-15 00:59:56.804282] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.961 [2024-05-15 00:59:56.813249] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.961 [2024-05-15 00:59:56.813846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.814062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.814104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.961 [2024-05-15 00:59:56.814123] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.961 [2024-05-15 00:59:56.814395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.961 [2024-05-15 00:59:56.814665] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.961 [2024-05-15 00:59:56.814687] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.961 [2024-05-15 00:59:56.814701] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.961 [2024-05-15 00:59:56.818813] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.961 [2024-05-15 00:59:56.827749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.961 [2024-05-15 00:59:56.828396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.828675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.828703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.961 [2024-05-15 00:59:56.828720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.961 [2024-05-15 00:59:56.829006] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.961 [2024-05-15 00:59:56.829277] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.961 [2024-05-15 00:59:56.829299] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.961 [2024-05-15 00:59:56.829314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.961 [2024-05-15 00:59:56.833415] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.961 [2024-05-15 00:59:56.842375] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.961 [2024-05-15 00:59:56.843013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.843342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.843393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.961 [2024-05-15 00:59:56.843411] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.961 [2024-05-15 00:59:56.843683] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.961 [2024-05-15 00:59:56.843965] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.961 [2024-05-15 00:59:56.843988] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.961 [2024-05-15 00:59:56.844002] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.961 [2024-05-15 00:59:56.848082] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.961 [2024-05-15 00:59:56.857026] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.961 [2024-05-15 00:59:56.857572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.857830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.857859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.961 [2024-05-15 00:59:56.857876] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.961 [2024-05-15 00:59:56.858160] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.961 [2024-05-15 00:59:56.858432] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.961 [2024-05-15 00:59:56.858453] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.961 [2024-05-15 00:59:56.858468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.961 [2024-05-15 00:59:56.862554] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.961 [2024-05-15 00:59:56.871499] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.961 [2024-05-15 00:59:56.872140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.872448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.872496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.961 [2024-05-15 00:59:56.872513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.961 [2024-05-15 00:59:56.872786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.961 [2024-05-15 00:59:56.873068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.961 [2024-05-15 00:59:56.873091] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.961 [2024-05-15 00:59:56.873106] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.961 [2024-05-15 00:59:56.877208] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.961 [2024-05-15 00:59:56.885886] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.961 [2024-05-15 00:59:56.886364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.886590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.886654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.961 [2024-05-15 00:59:56.886673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.961 [2024-05-15 00:59:56.886960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.961 [2024-05-15 00:59:56.887237] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.961 [2024-05-15 00:59:56.887259] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.961 [2024-05-15 00:59:56.887274] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.961 [2024-05-15 00:59:56.891368] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.961 [2024-05-15 00:59:56.900342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.961 [2024-05-15 00:59:56.900857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.901027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.901055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.961 [2024-05-15 00:59:56.901072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.961 [2024-05-15 00:59:56.901337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.961 [2024-05-15 00:59:56.901606] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.961 [2024-05-15 00:59:56.901627] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.961 [2024-05-15 00:59:56.901642] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.961 [2024-05-15 00:59:56.905771] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.961 [2024-05-15 00:59:56.914984] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.961 [2024-05-15 00:59:56.915561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.915844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.961 [2024-05-15 00:59:56.915893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.961 [2024-05-15 00:59:56.915911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.961 [2024-05-15 00:59:56.916194] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.962 [2024-05-15 00:59:56.916466] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.962 [2024-05-15 00:59:56.916487] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.962 [2024-05-15 00:59:56.916502] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.962 [2024-05-15 00:59:56.920600] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.962 [2024-05-15 00:59:56.929591] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.962 [2024-05-15 00:59:56.930199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.962 [2024-05-15 00:59:56.930499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.962 [2024-05-15 00:59:56.930555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.962 [2024-05-15 00:59:56.930579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.962 [2024-05-15 00:59:56.930852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.962 [2024-05-15 00:59:56.931135] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.962 [2024-05-15 00:59:56.931158] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.962 [2024-05-15 00:59:56.931173] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.962 [2024-05-15 00:59:56.935249] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.962 [2024-05-15 00:59:56.944173] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.962 [2024-05-15 00:59:56.944818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.962 [2024-05-15 00:59:56.945098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.962 [2024-05-15 00:59:56.945149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.962 [2024-05-15 00:59:56.945167] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.962 [2024-05-15 00:59:56.945439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.962 [2024-05-15 00:59:56.945709] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.962 [2024-05-15 00:59:56.945731] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.962 [2024-05-15 00:59:56.945745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.962 [2024-05-15 00:59:56.949865] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.962 [2024-05-15 00:59:56.958608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.962 [2024-05-15 00:59:56.959070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.962 [2024-05-15 00:59:56.959342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.962 [2024-05-15 00:59:56.959394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.962 [2024-05-15 00:59:56.959412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.962 [2024-05-15 00:59:56.959684] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.962 [2024-05-15 00:59:56.959971] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.962 [2024-05-15 00:59:56.959993] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.962 [2024-05-15 00:59:56.960008] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.962 [2024-05-15 00:59:56.964100] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.962 [2024-05-15 00:59:56.973011] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.962 [2024-05-15 00:59:56.973610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.962 [2024-05-15 00:59:56.973886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.962 [2024-05-15 00:59:56.973943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.962 [2024-05-15 00:59:56.973964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.962 [2024-05-15 00:59:56.974243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.962 [2024-05-15 00:59:56.974514] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.962 [2024-05-15 00:59:56.974535] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.962 [2024-05-15 00:59:56.974550] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.962 [2024-05-15 00:59:56.978634] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.962 [2024-05-15 00:59:56.987562] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.962 [2024-05-15 00:59:56.988099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.962 [2024-05-15 00:59:56.988434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.962 [2024-05-15 00:59:56.988481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.962 [2024-05-15 00:59:56.988498] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.962 [2024-05-15 00:59:56.988765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.962 [2024-05-15 00:59:56.989044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.962 [2024-05-15 00:59:56.989066] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.962 [2024-05-15 00:59:56.989081] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.962 [2024-05-15 00:59:56.993219] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.962 [2024-05-15 00:59:57.001994] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:09.962 [2024-05-15 00:59:57.002598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.962 [2024-05-15 00:59:57.002842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.962 [2024-05-15 00:59:57.002889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:09.962 [2024-05-15 00:59:57.002907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:09.962 [2024-05-15 00:59:57.003197] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:09.962 [2024-05-15 00:59:57.003468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:09.962 [2024-05-15 00:59:57.003490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:09.962 [2024-05-15 00:59:57.003505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:09.962 [2024-05-15 00:59:57.007597] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.962 [2024-05-15 00:59:57.016622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.222 [2024-05-15 00:59:57.017282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.222 [2024-05-15 00:59:57.017532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.222 [2024-05-15 00:59:57.017563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.222 [2024-05-15 00:59:57.017581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.222 [2024-05-15 00:59:57.017850] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.222 [2024-05-15 00:59:57.018136] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.222 [2024-05-15 00:59:57.018159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.222 [2024-05-15 00:59:57.018174] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.222 [2024-05-15 00:59:57.022270] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.222 [2024-05-15 00:59:57.031244] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.222 [2024-05-15 00:59:57.031829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.222 [2024-05-15 00:59:57.032139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.222 [2024-05-15 00:59:57.032189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.222 [2024-05-15 00:59:57.032207] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.222 [2024-05-15 00:59:57.032486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.222 [2024-05-15 00:59:57.032755] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.222 [2024-05-15 00:59:57.032777] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.222 [2024-05-15 00:59:57.032792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.222 [2024-05-15 00:59:57.036878] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.222 [2024-05-15 00:59:57.045787] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.222 [2024-05-15 00:59:57.046346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.222 [2024-05-15 00:59:57.046631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.222 [2024-05-15 00:59:57.046681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.222 [2024-05-15 00:59:57.046698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.222 [2024-05-15 00:59:57.046976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.222 [2024-05-15 00:59:57.047257] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.222 [2024-05-15 00:59:57.047278] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.222 [2024-05-15 00:59:57.047293] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.222 [2024-05-15 00:59:57.051423] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.222 [2024-05-15 00:59:57.060423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.222 [2024-05-15 00:59:57.061107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.222 [2024-05-15 00:59:57.061447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.222 [2024-05-15 00:59:57.061497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.222 [2024-05-15 00:59:57.061515] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.222 [2024-05-15 00:59:57.061788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.222 [2024-05-15 00:59:57.062068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.222 [2024-05-15 00:59:57.062098] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.222 [2024-05-15 00:59:57.062114] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.222 [2024-05-15 00:59:57.066244] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.222 [2024-05-15 00:59:57.074948] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.222 [2024-05-15 00:59:57.075436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.222 [2024-05-15 00:59:57.075751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.222 [2024-05-15 00:59:57.075800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.222 [2024-05-15 00:59:57.075818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.222 [2024-05-15 00:59:57.076108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.222 [2024-05-15 00:59:57.076385] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.222 [2024-05-15 00:59:57.076407] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.222 [2024-05-15 00:59:57.076422] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.222 [2024-05-15 00:59:57.080628] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.222 [2024-05-15 00:59:57.089333] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.222 [2024-05-15 00:59:57.089965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.222 [2024-05-15 00:59:57.090235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.222 [2024-05-15 00:59:57.090289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.222 [2024-05-15 00:59:57.090307] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.222 [2024-05-15 00:59:57.090579] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.222 [2024-05-15 00:59:57.090848] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.222 [2024-05-15 00:59:57.090870] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.222 [2024-05-15 00:59:57.090884] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.222 [2024-05-15 00:59:57.094998] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.222 [2024-05-15 00:59:57.103733] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.222 [2024-05-15 00:59:57.104344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.222 [2024-05-15 00:59:57.104677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.104725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.223 [2024-05-15 00:59:57.104743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.223 [2024-05-15 00:59:57.105033] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.223 [2024-05-15 00:59:57.105309] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.223 [2024-05-15 00:59:57.105331] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.223 [2024-05-15 00:59:57.105352] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.223 [2024-05-15 00:59:57.109445] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.223 [2024-05-15 00:59:57.118222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.223 [2024-05-15 00:59:57.118753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.119027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.119068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.223 [2024-05-15 00:59:57.119088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.223 [2024-05-15 00:59:57.119360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.223 [2024-05-15 00:59:57.119629] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.223 [2024-05-15 00:59:57.119650] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.223 [2024-05-15 00:59:57.119665] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.223 [2024-05-15 00:59:57.123777] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.223 [2024-05-15 00:59:57.132710] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.223 [2024-05-15 00:59:57.133289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.133629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.133676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.223 [2024-05-15 00:59:57.133695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.223 [2024-05-15 00:59:57.133979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.223 [2024-05-15 00:59:57.134250] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.223 [2024-05-15 00:59:57.134272] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.223 [2024-05-15 00:59:57.134287] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.223 [2024-05-15 00:59:57.138459] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.223 [2024-05-15 00:59:57.147159] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.223 [2024-05-15 00:59:57.147756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.148115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.148174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.223 [2024-05-15 00:59:57.148194] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.223 [2024-05-15 00:59:57.148466] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.223 [2024-05-15 00:59:57.148735] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.223 [2024-05-15 00:59:57.148757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.223 [2024-05-15 00:59:57.148771] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.223 [2024-05-15 00:59:57.152920] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.223 [2024-05-15 00:59:57.161640] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.223 [2024-05-15 00:59:57.162280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.162540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.162606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.223 [2024-05-15 00:59:57.162639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.223 [2024-05-15 00:59:57.162911] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.223 [2024-05-15 00:59:57.163192] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.223 [2024-05-15 00:59:57.163214] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.223 [2024-05-15 00:59:57.163229] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.223 [2024-05-15 00:59:57.167346] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.223 [2024-05-15 00:59:57.176055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.223 [2024-05-15 00:59:57.176698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.176890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.176920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.223 [2024-05-15 00:59:57.176948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.223 [2024-05-15 00:59:57.177229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.223 [2024-05-15 00:59:57.177499] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.223 [2024-05-15 00:59:57.177520] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.223 [2024-05-15 00:59:57.177535] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.223 [2024-05-15 00:59:57.181629] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.223 [2024-05-15 00:59:57.190572] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.223 [2024-05-15 00:59:57.191156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.191502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.191547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.223 [2024-05-15 00:59:57.191565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.223 [2024-05-15 00:59:57.191838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.223 [2024-05-15 00:59:57.192120] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.223 [2024-05-15 00:59:57.192143] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.223 [2024-05-15 00:59:57.192158] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.223 [2024-05-15 00:59:57.196266] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.223 [2024-05-15 00:59:57.204980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.223 [2024-05-15 00:59:57.205593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.205969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.206022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.223 [2024-05-15 00:59:57.206040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.223 [2024-05-15 00:59:57.206312] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.223 [2024-05-15 00:59:57.206594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.223 [2024-05-15 00:59:57.206615] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.223 [2024-05-15 00:59:57.206630] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.223 [2024-05-15 00:59:57.210737] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.223 [2024-05-15 00:59:57.219479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.223 [2024-05-15 00:59:57.220009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.220293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.220343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.223 [2024-05-15 00:59:57.220362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.223 [2024-05-15 00:59:57.220634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.223 [2024-05-15 00:59:57.220904] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.223 [2024-05-15 00:59:57.220926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.223 [2024-05-15 00:59:57.220963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.223 [2024-05-15 00:59:57.225036] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.223 [2024-05-15 00:59:57.233959] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.223 [2024-05-15 00:59:57.234501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.234761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.223 [2024-05-15 00:59:57.234810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.223 [2024-05-15 00:59:57.234829] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.223 [2024-05-15 00:59:57.235115] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.224 [2024-05-15 00:59:57.235386] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.224 [2024-05-15 00:59:57.235407] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.224 [2024-05-15 00:59:57.235422] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.224 [2024-05-15 00:59:57.239528] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.224 [2024-05-15 00:59:57.248447] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.224 [2024-05-15 00:59:57.249022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.224 [2024-05-15 00:59:57.249199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.224 [2024-05-15 00:59:57.249229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.224 [2024-05-15 00:59:57.249247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.224 [2024-05-15 00:59:57.249521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.224 [2024-05-15 00:59:57.249791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.224 [2024-05-15 00:59:57.249813] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.224 [2024-05-15 00:59:57.249827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.224 [2024-05-15 00:59:57.253955] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.224 [2024-05-15 00:59:57.262882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.224 [2024-05-15 00:59:57.263445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.224 [2024-05-15 00:59:57.263681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.224 [2024-05-15 00:59:57.263710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.224 [2024-05-15 00:59:57.263728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.224 [2024-05-15 00:59:57.264014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.224 [2024-05-15 00:59:57.264284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.224 [2024-05-15 00:59:57.264306] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.224 [2024-05-15 00:59:57.264321] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.224 [2024-05-15 00:59:57.268419] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.224 [2024-05-15 00:59:57.277439] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.224 [2024-05-15 00:59:57.278120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.224 [2024-05-15 00:59:57.278337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.224 [2024-05-15 00:59:57.278390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.224 [2024-05-15 00:59:57.278408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.224 [2024-05-15 00:59:57.278686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.483 [2024-05-15 00:59:57.279035] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.483 [2024-05-15 00:59:57.279072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.483 [2024-05-15 00:59:57.279088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.483 [2024-05-15 00:59:57.283214] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.484 [2024-05-15 00:59:57.291973] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.484 [2024-05-15 00:59:57.292584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.292974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.293003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.484 [2024-05-15 00:59:57.293027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.484 [2024-05-15 00:59:57.293301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.484 [2024-05-15 00:59:57.293570] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.484 [2024-05-15 00:59:57.293592] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.484 [2024-05-15 00:59:57.293606] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.484 [2024-05-15 00:59:57.297706] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.484 [2024-05-15 00:59:57.306424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.484 [2024-05-15 00:59:57.306840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.307180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.307233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.484 [2024-05-15 00:59:57.307252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.484 [2024-05-15 00:59:57.307524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.484 [2024-05-15 00:59:57.307794] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.484 [2024-05-15 00:59:57.307815] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.484 [2024-05-15 00:59:57.307829] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.484 [2024-05-15 00:59:57.312008] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.484 [2024-05-15 00:59:57.321001] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.484 [2024-05-15 00:59:57.321565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.321841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.321890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.484 [2024-05-15 00:59:57.321908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.484 [2024-05-15 00:59:57.322193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.484 [2024-05-15 00:59:57.322463] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.484 [2024-05-15 00:59:57.322485] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.484 [2024-05-15 00:59:57.322500] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.484 [2024-05-15 00:59:57.326642] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.484 [2024-05-15 00:59:57.335455] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.484 [2024-05-15 00:59:57.335945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.336264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.336304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.484 [2024-05-15 00:59:57.336324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.484 [2024-05-15 00:59:57.336606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.484 [2024-05-15 00:59:57.336875] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.484 [2024-05-15 00:59:57.336897] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.484 [2024-05-15 00:59:57.336911] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.484 [2024-05-15 00:59:57.341077] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.484 [2024-05-15 00:59:57.350107] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.484 [2024-05-15 00:59:57.350763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.351062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.351119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.484 [2024-05-15 00:59:57.351136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.484 [2024-05-15 00:59:57.351409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.484 [2024-05-15 00:59:57.351677] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.484 [2024-05-15 00:59:57.351699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.484 [2024-05-15 00:59:57.351713] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.484 [2024-05-15 00:59:57.355846] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.484 [2024-05-15 00:59:57.364577] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.484 [2024-05-15 00:59:57.365136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.365383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.365431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.484 [2024-05-15 00:59:57.365449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.484 [2024-05-15 00:59:57.365721] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.484 [2024-05-15 00:59:57.366005] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.484 [2024-05-15 00:59:57.366028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.484 [2024-05-15 00:59:57.366043] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.484 [2024-05-15 00:59:57.370161] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.484 [2024-05-15 00:59:57.379163] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.484 [2024-05-15 00:59:57.379809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.380092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.380143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.484 [2024-05-15 00:59:57.380161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.484 [2024-05-15 00:59:57.380433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.484 [2024-05-15 00:59:57.380709] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.484 [2024-05-15 00:59:57.380731] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.484 [2024-05-15 00:59:57.380745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.484 [2024-05-15 00:59:57.384908] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.484 [2024-05-15 00:59:57.393625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.484 [2024-05-15 00:59:57.394229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.394505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.394550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.484 [2024-05-15 00:59:57.394568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.484 [2024-05-15 00:59:57.394839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.484 [2024-05-15 00:59:57.395121] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.484 [2024-05-15 00:59:57.395144] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.484 [2024-05-15 00:59:57.395159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.484 [2024-05-15 00:59:57.399280] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.484 [2024-05-15 00:59:57.408320] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.484 [2024-05-15 00:59:57.408866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.409234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.484 [2024-05-15 00:59:57.409283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.484 [2024-05-15 00:59:57.409301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.485 [2024-05-15 00:59:57.409579] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.485 [2024-05-15 00:59:57.409850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.485 [2024-05-15 00:59:57.409871] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.485 [2024-05-15 00:59:57.409886] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.485 [2024-05-15 00:59:57.414005] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.485 [2024-05-15 00:59:57.422749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.485 [2024-05-15 00:59:57.423352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.423630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.423658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.485 [2024-05-15 00:59:57.423675] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.485 [2024-05-15 00:59:57.423967] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.485 [2024-05-15 00:59:57.424238] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.485 [2024-05-15 00:59:57.424266] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.485 [2024-05-15 00:59:57.424281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.485 [2024-05-15 00:59:57.428405] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.485 [2024-05-15 00:59:57.437361] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.485 [2024-05-15 00:59:57.437970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.438255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.438283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.485 [2024-05-15 00:59:57.438301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.485 [2024-05-15 00:59:57.438573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.485 [2024-05-15 00:59:57.438844] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.485 [2024-05-15 00:59:57.438866] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.485 [2024-05-15 00:59:57.438881] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.485 [2024-05-15 00:59:57.442977] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.485 [2024-05-15 00:59:57.451991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.485 [2024-05-15 00:59:57.452524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.452812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.452859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.485 [2024-05-15 00:59:57.452876] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.485 [2024-05-15 00:59:57.453154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.485 [2024-05-15 00:59:57.453431] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.485 [2024-05-15 00:59:57.453453] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.485 [2024-05-15 00:59:57.453467] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.485 [2024-05-15 00:59:57.457576] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.485 [2024-05-15 00:59:57.466527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.485 [2024-05-15 00:59:57.467029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.467225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.467253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.485 [2024-05-15 00:59:57.467271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.485 [2024-05-15 00:59:57.467543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.485 [2024-05-15 00:59:57.467812] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.485 [2024-05-15 00:59:57.467834] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.485 [2024-05-15 00:59:57.467855] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.485 [2024-05-15 00:59:57.471945] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.485 [2024-05-15 00:59:57.481209] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.485 [2024-05-15 00:59:57.481841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.482032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.482062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.485 [2024-05-15 00:59:57.482080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.485 [2024-05-15 00:59:57.482352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.485 [2024-05-15 00:59:57.482628] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.485 [2024-05-15 00:59:57.482652] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.485 [2024-05-15 00:59:57.482667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.485 [2024-05-15 00:59:57.486792] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.485 [2024-05-15 00:59:57.495740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.485 [2024-05-15 00:59:57.496386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.496578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.496606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.485 [2024-05-15 00:59:57.496624] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.485 [2024-05-15 00:59:57.496897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.485 [2024-05-15 00:59:57.497179] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.485 [2024-05-15 00:59:57.497202] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.485 [2024-05-15 00:59:57.497217] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.485 [2024-05-15 00:59:57.501362] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.485 [2024-05-15 00:59:57.510411] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.485 [2024-05-15 00:59:57.510896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.511104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.511129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.485 [2024-05-15 00:59:57.511146] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.485 [2024-05-15 00:59:57.511411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.485 [2024-05-15 00:59:57.511679] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.485 [2024-05-15 00:59:57.511701] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.485 [2024-05-15 00:59:57.511716] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.485 [2024-05-15 00:59:57.515875] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.485 [2024-05-15 00:59:57.524876] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.485 [2024-05-15 00:59:57.525490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.525754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.525802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.485 [2024-05-15 00:59:57.525820] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.485 [2024-05-15 00:59:57.526106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.485 [2024-05-15 00:59:57.526377] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.485 [2024-05-15 00:59:57.526399] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.485 [2024-05-15 00:59:57.526414] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.485 [2024-05-15 00:59:57.530547] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.485 [2024-05-15 00:59:57.539400] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.485 [2024-05-15 00:59:57.539981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.540205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.485 [2024-05-15 00:59:57.540255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.485 [2024-05-15 00:59:57.540273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.746 [2024-05-15 00:59:57.540589] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.746 [2024-05-15 00:59:57.540890] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.746 [2024-05-15 00:59:57.540916] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.746 [2024-05-15 00:59:57.540942] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.746 [2024-05-15 00:59:57.545085] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.746 [2024-05-15 00:59:57.553821] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.746 [2024-05-15 00:59:57.554280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.746 [2024-05-15 00:59:57.554547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.746 [2024-05-15 00:59:57.554601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.746 [2024-05-15 00:59:57.554619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.746 [2024-05-15 00:59:57.554891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.746 [2024-05-15 00:59:57.555172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.746 [2024-05-15 00:59:57.555197] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.746 [2024-05-15 00:59:57.555214] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.746 [2024-05-15 00:59:57.559344] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.746 [2024-05-15 00:59:57.568296] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.746 [2024-05-15 00:59:57.568788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.746 [2024-05-15 00:59:57.568996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.746 [2024-05-15 00:59:57.569023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.746 [2024-05-15 00:59:57.569041] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.746 [2024-05-15 00:59:57.569313] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.746 [2024-05-15 00:59:57.569583] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.746 [2024-05-15 00:59:57.569604] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.746 [2024-05-15 00:59:57.569619] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.746 [2024-05-15 00:59:57.573727] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.746 [2024-05-15 00:59:57.582872] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.746 [2024-05-15 00:59:57.583357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.746 [2024-05-15 00:59:57.583545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.746 [2024-05-15 00:59:57.583573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.746 [2024-05-15 00:59:57.583591] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.746 [2024-05-15 00:59:57.583877] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.746 [2024-05-15 00:59:57.584157] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.746 [2024-05-15 00:59:57.584179] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.746 [2024-05-15 00:59:57.584195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.746 [2024-05-15 00:59:57.588275] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.746 [2024-05-15 00:59:57.597427] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.746 [2024-05-15 00:59:57.598140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.746 [2024-05-15 00:59:57.598453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.746 [2024-05-15 00:59:57.598502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.746 [2024-05-15 00:59:57.598520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.746 [2024-05-15 00:59:57.598800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.746 [2024-05-15 00:59:57.599080] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.747 [2024-05-15 00:59:57.599103] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.747 [2024-05-15 00:59:57.599121] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.747 [2024-05-15 00:59:57.603214] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.747 [2024-05-15 00:59:57.611909] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.747 [2024-05-15 00:59:57.612444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.612689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.612738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.747 [2024-05-15 00:59:57.612756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.747 [2024-05-15 00:59:57.613041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.747 [2024-05-15 00:59:57.613312] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.747 [2024-05-15 00:59:57.613334] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.747 [2024-05-15 00:59:57.613351] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.747 [2024-05-15 00:59:57.617434] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.747 [2024-05-15 00:59:57.626364] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.747 [2024-05-15 00:59:57.626949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.627255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.627306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.747 [2024-05-15 00:59:57.627324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.747 [2024-05-15 00:59:57.627596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.747 [2024-05-15 00:59:57.627867] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.747 [2024-05-15 00:59:57.627889] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.747 [2024-05-15 00:59:57.627905] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.747 [2024-05-15 00:59:57.632005] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.747 [2024-05-15 00:59:57.640910] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.747 [2024-05-15 00:59:57.641511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.641713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.641743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.747 [2024-05-15 00:59:57.641761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.747 [2024-05-15 00:59:57.642045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.747 [2024-05-15 00:59:57.642316] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.747 [2024-05-15 00:59:57.642338] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.747 [2024-05-15 00:59:57.642354] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.747 [2024-05-15 00:59:57.646431] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.747 [2024-05-15 00:59:57.655338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.747 [2024-05-15 00:59:57.655939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.656216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.656264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.747 [2024-05-15 00:59:57.656283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.747 [2024-05-15 00:59:57.656556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.747 [2024-05-15 00:59:57.656826] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.747 [2024-05-15 00:59:57.656848] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.747 [2024-05-15 00:59:57.656863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.747 [2024-05-15 00:59:57.660954] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.747 [2024-05-15 00:59:57.669856] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.747 [2024-05-15 00:59:57.670443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.670711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.670761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.747 [2024-05-15 00:59:57.670779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.747 [2024-05-15 00:59:57.671062] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.747 [2024-05-15 00:59:57.671333] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.747 [2024-05-15 00:59:57.671355] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.747 [2024-05-15 00:59:57.671370] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.747 [2024-05-15 00:59:57.675448] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.747 [2024-05-15 00:59:57.684363] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.747 [2024-05-15 00:59:57.684904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.685147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.685188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.747 [2024-05-15 00:59:57.685206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.747 [2024-05-15 00:59:57.685485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.747 [2024-05-15 00:59:57.685754] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.747 [2024-05-15 00:59:57.685776] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.747 [2024-05-15 00:59:57.685791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.747 [2024-05-15 00:59:57.689901] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.747 [2024-05-15 00:59:57.698928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.747 [2024-05-15 00:59:57.699562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.699754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.699783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.747 [2024-05-15 00:59:57.699806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.747 [2024-05-15 00:59:57.700093] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.747 [2024-05-15 00:59:57.700364] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.747 [2024-05-15 00:59:57.700386] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.747 [2024-05-15 00:59:57.700401] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.747 [2024-05-15 00:59:57.704482] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.747 [2024-05-15 00:59:57.713379] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.747 [2024-05-15 00:59:57.714017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.714272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.714336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.747 [2024-05-15 00:59:57.714354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.747 [2024-05-15 00:59:57.714626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.747 [2024-05-15 00:59:57.714897] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.747 [2024-05-15 00:59:57.714919] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.747 [2024-05-15 00:59:57.714944] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.747 [2024-05-15 00:59:57.719047] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.747 [2024-05-15 00:59:57.727946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.747 [2024-05-15 00:59:57.728474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.728770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.728818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.747 [2024-05-15 00:59:57.728836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.747 [2024-05-15 00:59:57.729120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.747 [2024-05-15 00:59:57.729391] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.747 [2024-05-15 00:59:57.729414] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.747 [2024-05-15 00:59:57.729429] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.747 [2024-05-15 00:59:57.733554] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.747 [2024-05-15 00:59:57.742464] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.747 [2024-05-15 00:59:57.743004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.747 [2024-05-15 00:59:57.743276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.748 [2024-05-15 00:59:57.743304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.748 [2024-05-15 00:59:57.743322] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.748 [2024-05-15 00:59:57.743606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.748 [2024-05-15 00:59:57.743876] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.748 [2024-05-15 00:59:57.743898] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.748 [2024-05-15 00:59:57.743914] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.748 [2024-05-15 00:59:57.748014] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.748 [2024-05-15 00:59:57.756961] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.748 [2024-05-15 00:59:57.757552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.748 [2024-05-15 00:59:57.757794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.748 [2024-05-15 00:59:57.757848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.748 [2024-05-15 00:59:57.757866] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.748 [2024-05-15 00:59:57.758154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.748 [2024-05-15 00:59:57.758432] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.748 [2024-05-15 00:59:57.758454] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.748 [2024-05-15 00:59:57.758469] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.748 [2024-05-15 00:59:57.762566] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.748 [2024-05-15 00:59:57.771515] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.748 [2024-05-15 00:59:57.772126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.748 [2024-05-15 00:59:57.772412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.748 [2024-05-15 00:59:57.772439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.748 [2024-05-15 00:59:57.772457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.748 [2024-05-15 00:59:57.772742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.748 [2024-05-15 00:59:57.773029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.748 [2024-05-15 00:59:57.773052] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.748 [2024-05-15 00:59:57.773067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.748 [2024-05-15 00:59:57.777188] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.748 [2024-05-15 00:59:57.786119] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.748 [2024-05-15 00:59:57.786738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.748 [2024-05-15 00:59:57.787046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.748 [2024-05-15 00:59:57.787088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.748 [2024-05-15 00:59:57.787106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.748 [2024-05-15 00:59:57.787378] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.748 [2024-05-15 00:59:57.787654] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.748 [2024-05-15 00:59:57.787676] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.748 [2024-05-15 00:59:57.787691] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.748 [2024-05-15 00:59:57.791816] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:10.748 [2024-05-15 00:59:57.800565] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.748 [2024-05-15 00:59:57.801095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.748 [2024-05-15 00:59:57.801426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.748 [2024-05-15 00:59:57.801478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:10.748 [2024-05-15 00:59:57.801496] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:10.748 [2024-05-15 00:59:57.801770] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:10.748 [2024-05-15 00:59:57.802049] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.748 [2024-05-15 00:59:57.802071] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.748 [2024-05-15 00:59:57.802086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.008 [2024-05-15 00:59:57.806267] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.008 [2024-05-15 00:59:57.815032] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.008 [2024-05-15 00:59:57.815562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.008 [2024-05-15 00:59:57.815801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.008 [2024-05-15 00:59:57.815828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.008 [2024-05-15 00:59:57.815845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.008 [2024-05-15 00:59:57.816121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.008 [2024-05-15 00:59:57.816392] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.008 [2024-05-15 00:59:57.816414] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.008 [2024-05-15 00:59:57.816428] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.008 [2024-05-15 00:59:57.820554] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.008 [2024-05-15 00:59:57.829631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.008 [2024-05-15 00:59:57.830250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.008 [2024-05-15 00:59:57.830518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.008 [2024-05-15 00:59:57.830565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.008 [2024-05-15 00:59:57.830583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.008 [2024-05-15 00:59:57.830856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.008 [2024-05-15 00:59:57.831139] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.008 [2024-05-15 00:59:57.831167] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.008 [2024-05-15 00:59:57.831183] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.008 [2024-05-15 00:59:57.835311] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.008 [2024-05-15 00:59:57.844255] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.008 [2024-05-15 00:59:57.844792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.008 [2024-05-15 00:59:57.845069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.008 [2024-05-15 00:59:57.845105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.008 [2024-05-15 00:59:57.845136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.008 [2024-05-15 00:59:57.845410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.008 [2024-05-15 00:59:57.845680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.008 [2024-05-15 00:59:57.845701] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.008 [2024-05-15 00:59:57.845717] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.008 [2024-05-15 00:59:57.849828] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.008 [2024-05-15 00:59:57.858820] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.008 [2024-05-15 00:59:57.859453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.008 [2024-05-15 00:59:57.859721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.008 [2024-05-15 00:59:57.859764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.008 [2024-05-15 00:59:57.859782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.008 [2024-05-15 00:59:57.860075] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.008 [2024-05-15 00:59:57.860346] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.008 [2024-05-15 00:59:57.860367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.008 [2024-05-15 00:59:57.860382] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.008 [2024-05-15 00:59:57.864520] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.008 [2024-05-15 00:59:57.873239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.008 [2024-05-15 00:59:57.873827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.008 [2024-05-15 00:59:57.874142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.008 [2024-05-15 00:59:57.874191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.008 [2024-05-15 00:59:57.874209] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.008 [2024-05-15 00:59:57.874481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.008 [2024-05-15 00:59:57.874750] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.008 [2024-05-15 00:59:57.874772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.008 [2024-05-15 00:59:57.874794] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.008 [2024-05-15 00:59:57.878912] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.008 [2024-05-15 00:59:57.887611] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.008 [2024-05-15 00:59:57.888154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.008 [2024-05-15 00:59:57.888421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.008 [2024-05-15 00:59:57.888449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.008 [2024-05-15 00:59:57.888466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.008 [2024-05-15 00:59:57.888739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.008 [2024-05-15 00:59:57.889026] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.008 [2024-05-15 00:59:57.889049] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.008 [2024-05-15 00:59:57.889064] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.008 [2024-05-15 00:59:57.893177] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.008 [2024-05-15 00:59:57.902138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.008 [2024-05-15 00:59:57.902749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.008 [2024-05-15 00:59:57.903046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.008 [2024-05-15 00:59:57.903105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.008 [2024-05-15 00:59:57.903123] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.008 [2024-05-15 00:59:57.903401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.008 [2024-05-15 00:59:57.903671] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.008 [2024-05-15 00:59:57.903692] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.008 [2024-05-15 00:59:57.903707] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.009 [2024-05-15 00:59:57.907816] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.009 [2024-05-15 00:59:57.916568] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.009 [2024-05-15 00:59:57.917199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.009 [2024-05-15 00:59:57.917470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.009 [2024-05-15 00:59:57.917511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.009 [2024-05-15 00:59:57.917529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.009 [2024-05-15 00:59:57.917807] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.009 [2024-05-15 00:59:57.918098] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.009 [2024-05-15 00:59:57.918120] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.009 [2024-05-15 00:59:57.918136] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.009 [2024-05-15 00:59:57.922268] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.009 [2024-05-15 00:59:57.931052] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.009 [2024-05-15 00:59:57.931597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.009 [2024-05-15 00:59:57.931944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.009 [2024-05-15 00:59:57.931973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.009 [2024-05-15 00:59:57.931990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.009 [2024-05-15 00:59:57.932262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.009 [2024-05-15 00:59:57.932532] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.009 [2024-05-15 00:59:57.932553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.009 [2024-05-15 00:59:57.932568] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.009 [2024-05-15 00:59:57.936669] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.009 [2024-05-15 00:59:57.945647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.009 [2024-05-15 00:59:57.946239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.009 [2024-05-15 00:59:57.946552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.009 [2024-05-15 00:59:57.946602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.009 [2024-05-15 00:59:57.946620] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.009 [2024-05-15 00:59:57.946892] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.009 [2024-05-15 00:59:57.947181] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.009 [2024-05-15 00:59:57.947204] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.009 [2024-05-15 00:59:57.947219] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.009 [2024-05-15 00:59:57.951339] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.009 [2024-05-15 00:59:57.960135] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.009 [2024-05-15 00:59:57.960733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.009 [2024-05-15 00:59:57.961051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.009 [2024-05-15 00:59:57.961091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.009 [2024-05-15 00:59:57.961110] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.009 [2024-05-15 00:59:57.961382] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.009 [2024-05-15 00:59:57.961651] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.009 [2024-05-15 00:59:57.961673] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.009 [2024-05-15 00:59:57.961688] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.009 [2024-05-15 00:59:57.965806] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.009 [2024-05-15 00:59:57.974764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.009 [2024-05-15 00:59:57.975356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.009 [2024-05-15 00:59:57.975704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.009 [2024-05-15 00:59:57.975752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.009 [2024-05-15 00:59:57.975770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.009 [2024-05-15 00:59:57.976064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.009 [2024-05-15 00:59:57.976341] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.009 [2024-05-15 00:59:57.976363] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.009 [2024-05-15 00:59:57.976378] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.009 [2024-05-15 00:59:57.980519] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.009 [2024-05-15 00:59:57.989197] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.009 [2024-05-15 00:59:57.989800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.009 [2024-05-15 00:59:57.990122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.009 [2024-05-15 00:59:57.990162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.009 [2024-05-15 00:59:57.990181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.009 [2024-05-15 00:59:57.990453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.009 [2024-05-15 00:59:57.990723] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.009 [2024-05-15 00:59:57.990745] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.009 [2024-05-15 00:59:57.990759] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.009 [2024-05-15 00:59:57.994948] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.009 [2024-05-15 00:59:58.003656] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.009 [2024-05-15 00:59:58.004183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.009 [2024-05-15 00:59:58.004426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.009 [2024-05-15 00:59:58.004454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.009 [2024-05-15 00:59:58.004472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.009 [2024-05-15 00:59:58.004744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.009 [2024-05-15 00:59:58.005027] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.009 [2024-05-15 00:59:58.005050] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.009 [2024-05-15 00:59:58.005065] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.009 [2024-05-15 00:59:58.009185] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.009 [2024-05-15 00:59:58.018199] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.009 [2024-05-15 00:59:58.018676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.009 [2024-05-15 00:59:58.018928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.009 [2024-05-15 00:59:58.019018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.009 [2024-05-15 00:59:58.019035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.009 [2024-05-15 00:59:58.019302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.009 [2024-05-15 00:59:58.019570] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.009 [2024-05-15 00:59:58.019592] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.009 [2024-05-15 00:59:58.019607] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.009 [2024-05-15 00:59:58.023778] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.009 [2024-05-15 00:59:58.032795] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.009 [2024-05-15 00:59:58.033422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.010 [2024-05-15 00:59:58.033759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.010 [2024-05-15 00:59:58.033810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.010 [2024-05-15 00:59:58.033828] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.010 [2024-05-15 00:59:58.034117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.010 [2024-05-15 00:59:58.034388] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.010 [2024-05-15 00:59:58.034410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.010 [2024-05-15 00:59:58.034425] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.010 [2024-05-15 00:59:58.038512] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.010 [2024-05-15 00:59:58.047176] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.010 [2024-05-15 00:59:58.047706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.010 [2024-05-15 00:59:58.048086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.010 [2024-05-15 00:59:58.048138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.010 [2024-05-15 00:59:58.048157] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.010 [2024-05-15 00:59:58.048430] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.010 [2024-05-15 00:59:58.048700] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.010 [2024-05-15 00:59:58.048722] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.010 [2024-05-15 00:59:58.048737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.010 [2024-05-15 00:59:58.052881] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.010 [2024-05-15 00:59:58.061606] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.010 [2024-05-15 00:59:58.062177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.010 [2024-05-15 00:59:58.062350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.010 [2024-05-15 00:59:58.062380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.010 [2024-05-15 00:59:58.062403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.010 [2024-05-15 00:59:58.062683] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.010 [2024-05-15 00:59:58.062986] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.010 [2024-05-15 00:59:58.063021] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.010 [2024-05-15 00:59:58.063047] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.269 [2024-05-15 00:59:58.067221] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.269 [2024-05-15 00:59:58.076248] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.269 [2024-05-15 00:59:58.076755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.269 [2024-05-15 00:59:58.077026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.269 [2024-05-15 00:59:58.077108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.269 [2024-05-15 00:59:58.077127] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.269 [2024-05-15 00:59:58.077405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.269 [2024-05-15 00:59:58.077674] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.269 [2024-05-15 00:59:58.077697] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.269 [2024-05-15 00:59:58.077711] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.269 [2024-05-15 00:59:58.081815] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.269 [2024-05-15 00:59:58.090849] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.269 [2024-05-15 00:59:58.091458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.269 [2024-05-15 00:59:58.091787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.269 [2024-05-15 00:59:58.091836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.269 [2024-05-15 00:59:58.091854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.269 [2024-05-15 00:59:58.092146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.269 [2024-05-15 00:59:58.092416] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.269 [2024-05-15 00:59:58.092438] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.269 [2024-05-15 00:59:58.092453] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.269 [2024-05-15 00:59:58.096581] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.269 [2024-05-15 00:59:58.105285] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.269 [2024-05-15 00:59:58.105888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.269 [2024-05-15 00:59:58.106185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.269 [2024-05-15 00:59:58.106238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.269 [2024-05-15 00:59:58.106257] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.269 [2024-05-15 00:59:58.106544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.269 [2024-05-15 00:59:58.106819] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.269 [2024-05-15 00:59:58.106841] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.269 [2024-05-15 00:59:58.106856] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.269 [2024-05-15 00:59:58.111057] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.269 [2024-05-15 00:59:58.120049] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.269 [2024-05-15 00:59:58.120615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.269 [2024-05-15 00:59:58.120912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.269 [2024-05-15 00:59:58.120973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.269 [2024-05-15 00:59:58.120991] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.270 [2024-05-15 00:59:58.121264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.270 [2024-05-15 00:59:58.121533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.270 [2024-05-15 00:59:58.121555] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.270 [2024-05-15 00:59:58.121570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.270 [2024-05-15 00:59:58.125697] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
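
[Editor's note] Each ten-record cycle above repeats on a roughly 29 ms cadence: a disconnect NOTICE, two refused connect() attempts, a failed qpair flush, error state, failed reinitialization, and finally the reset marked failed before the next attempt begins. A minimal sketch of that retry shape, assuming a stand-in try_connect() and an illustrative delay constant; this is not SPDK's actual bdev_nvme reset machinery:

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    /* Stand-in for the transport connect the log shows failing; here it
     * always "refuses", as every attempt in this log section does. */
    static bool try_connect(void)
    {
        return false;
    }

    int main(void)
    {
        /* ~29 ms between attempts, matching the cadence seen above. */
        const struct timespec delay = { .tv_sec = 0, .tv_nsec = 29 * 1000000L };

        for (int attempt = 1; attempt <= 5; attempt++) {
            printf("resetting controller (attempt %d)\n", attempt);
            if (try_connect()) {
                printf("controller reconnected\n");
                return 0;
            }
            printf("Resetting controller failed.\n");
            nanosleep(&delay, NULL); /* wait, then retry */
        }
        return 1;
    }
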
00:23:11.270 [2024-05-15 00:59:58.134508] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.270 [2024-05-15 00:59:58.135014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.135384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.135439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.270 [2024-05-15 00:59:58.135458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.270 [2024-05-15 00:59:58.135736] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.270 [2024-05-15 00:59:58.136025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.270 [2024-05-15 00:59:58.136048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.270 [2024-05-15 00:59:58.136063] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.270 [2024-05-15 00:59:58.140195] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.270 [2024-05-15 00:59:58.148951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.270 [2024-05-15 00:59:58.149599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.149890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.149957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.270 [2024-05-15 00:59:58.149976] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.270 [2024-05-15 00:59:58.150249] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.270 [2024-05-15 00:59:58.150524] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.270 [2024-05-15 00:59:58.150547] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.270 [2024-05-15 00:59:58.150561] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.270 [2024-05-15 00:59:58.154685] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.270 [2024-05-15 00:59:58.163508] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.270 [2024-05-15 00:59:58.164177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.164526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.164576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.270 [2024-05-15 00:59:58.164594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.270 [2024-05-15 00:59:58.164866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.270 [2024-05-15 00:59:58.165148] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.270 [2024-05-15 00:59:58.165171] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.270 [2024-05-15 00:59:58.165186] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.270 [2024-05-15 00:59:58.169297] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.270 [2024-05-15 00:59:58.178091] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.270 [2024-05-15 00:59:58.178632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.178966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.178995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.270 [2024-05-15 00:59:58.179013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.270 [2024-05-15 00:59:58.179285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.270 [2024-05-15 00:59:58.179555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.270 [2024-05-15 00:59:58.179577] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.270 [2024-05-15 00:59:58.179592] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.270 [2024-05-15 00:59:58.183752] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.270 [2024-05-15 00:59:58.192501] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.270 [2024-05-15 00:59:58.193173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.193521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.193572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.270 [2024-05-15 00:59:58.193590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.270 [2024-05-15 00:59:58.193865] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.270 [2024-05-15 00:59:58.194156] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.270 [2024-05-15 00:59:58.194188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.270 [2024-05-15 00:59:58.194204] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.270 [2024-05-15 00:59:58.198309] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.270 [2024-05-15 00:59:58.207058] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.270 [2024-05-15 00:59:58.207607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.207926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.207987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.270 [2024-05-15 00:59:58.208005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.270 [2024-05-15 00:59:58.208283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.270 [2024-05-15 00:59:58.208553] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.270 [2024-05-15 00:59:58.208575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.270 [2024-05-15 00:59:58.208589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.270 [2024-05-15 00:59:58.212746] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.270 [2024-05-15 00:59:58.221474] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.270 [2024-05-15 00:59:58.222071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.222373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.222414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.270 [2024-05-15 00:59:58.222433] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.270 [2024-05-15 00:59:58.222705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.270 [2024-05-15 00:59:58.223015] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.270 [2024-05-15 00:59:58.223039] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.270 [2024-05-15 00:59:58.223054] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.270 [2024-05-15 00:59:58.227203] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.270 [2024-05-15 00:59:58.235936] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.270 [2024-05-15 00:59:58.236403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.236723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.236776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.270 [2024-05-15 00:59:58.236794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.270 [2024-05-15 00:59:58.237079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.270 [2024-05-15 00:59:58.237350] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.270 [2024-05-15 00:59:58.237372] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.270 [2024-05-15 00:59:58.237397] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.270 [2024-05-15 00:59:58.241528] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.270 [2024-05-15 00:59:58.250542] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.270 [2024-05-15 00:59:58.251118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.251467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.270 [2024-05-15 00:59:58.251517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.270 [2024-05-15 00:59:58.251534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.270 [2024-05-15 00:59:58.251806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.270 [2024-05-15 00:59:58.252100] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.271 [2024-05-15 00:59:58.252123] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.271 [2024-05-15 00:59:58.252138] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.271 [2024-05-15 00:59:58.256266] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.271 [2024-05-15 00:59:58.264965] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.271 [2024-05-15 00:59:58.265590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.271 [2024-05-15 00:59:58.265876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.271 [2024-05-15 00:59:58.265904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.271 [2024-05-15 00:59:58.265921] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.271 [2024-05-15 00:59:58.266213] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.271 [2024-05-15 00:59:58.266484] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.271 [2024-05-15 00:59:58.266506] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.271 [2024-05-15 00:59:58.266521] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.271 [2024-05-15 00:59:58.270651] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.271 [2024-05-15 00:59:58.279384] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.271 [2024-05-15 00:59:58.279898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.271 [2024-05-15 00:59:58.280230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.271 [2024-05-15 00:59:58.280272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.271 [2024-05-15 00:59:58.280291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.271 [2024-05-15 00:59:58.280563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.271 [2024-05-15 00:59:58.280832] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.271 [2024-05-15 00:59:58.280854] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.271 [2024-05-15 00:59:58.280869] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.271 [2024-05-15 00:59:58.284989] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.271 [2024-05-15 00:59:58.293893] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.271 [2024-05-15 00:59:58.294474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.271 [2024-05-15 00:59:58.294767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.271 [2024-05-15 00:59:58.294817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.271 [2024-05-15 00:59:58.294834] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.271 [2024-05-15 00:59:58.295111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.271 [2024-05-15 00:59:58.295381] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.271 [2024-05-15 00:59:58.295402] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.271 [2024-05-15 00:59:58.295417] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.271 [2024-05-15 00:59:58.299518] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.271 [2024-05-15 00:59:58.308446] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.271 [2024-05-15 00:59:58.309002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.271 [2024-05-15 00:59:58.309340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.271 [2024-05-15 00:59:58.309397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.271 [2024-05-15 00:59:58.309416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.271 [2024-05-15 00:59:58.309687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.271 [2024-05-15 00:59:58.309967] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.271 [2024-05-15 00:59:58.309989] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.271 [2024-05-15 00:59:58.310004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.271 [2024-05-15 00:59:58.314079] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.271 [2024-05-15 00:59:58.323059] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.271 [2024-05-15 00:59:58.323694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.271 [2024-05-15 00:59:58.323967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.271 [2024-05-15 00:59:58.324031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.271 [2024-05-15 00:59:58.324049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.271 [2024-05-15 00:59:58.324338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.271 [2024-05-15 00:59:58.324635] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.271 [2024-05-15 00:59:58.324660] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.271 [2024-05-15 00:59:58.324676] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.530 [2024-05-15 00:59:58.328830] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.530 [2024-05-15 00:59:58.337599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.530 [2024-05-15 00:59:58.338223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.530 [2024-05-15 00:59:58.338530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.530 [2024-05-15 00:59:58.338579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.530 [2024-05-15 00:59:58.338597] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.530 [2024-05-15 00:59:58.338870] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.530 [2024-05-15 00:59:58.339147] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.530 [2024-05-15 00:59:58.339169] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.530 [2024-05-15 00:59:58.339184] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.530 [2024-05-15 00:59:58.343300] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.530 [2024-05-15 00:59:58.351993] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.530 [2024-05-15 00:59:58.352614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.530 [2024-05-15 00:59:58.352802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.530 [2024-05-15 00:59:58.352830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.530 [2024-05-15 00:59:58.352848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.530 [2024-05-15 00:59:58.353133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.530 [2024-05-15 00:59:58.353404] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.530 [2024-05-15 00:59:58.353426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.530 [2024-05-15 00:59:58.353440] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.530 [2024-05-15 00:59:58.357544] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.530 [2024-05-15 00:59:58.366510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.530 [2024-05-15 00:59:58.367095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.530 [2024-05-15 00:59:58.367347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.367377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.531 [2024-05-15 00:59:58.367395] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.531 [2024-05-15 00:59:58.367673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.531 [2024-05-15 00:59:58.367959] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.531 [2024-05-15 00:59:58.367982] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.531 [2024-05-15 00:59:58.367997] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.531 [2024-05-15 00:59:58.372110] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.531 [2024-05-15 00:59:58.381025] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.531 [2024-05-15 00:59:58.381657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.382028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.382069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.531 [2024-05-15 00:59:58.382088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.531 [2024-05-15 00:59:58.382361] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.531 [2024-05-15 00:59:58.382631] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.531 [2024-05-15 00:59:58.382653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.531 [2024-05-15 00:59:58.382668] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.531 [2024-05-15 00:59:58.386821] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.531 [2024-05-15 00:59:58.395569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.531 [2024-05-15 00:59:58.396013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.396240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.396301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.531 [2024-05-15 00:59:58.396319] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.531 [2024-05-15 00:59:58.396586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.531 [2024-05-15 00:59:58.396855] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.531 [2024-05-15 00:59:58.396876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.531 [2024-05-15 00:59:58.396891] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.531 [2024-05-15 00:59:58.401039] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.531 [2024-05-15 00:59:58.410115] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.531 [2024-05-15 00:59:58.410716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.411007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.411039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.531 [2024-05-15 00:59:58.411057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.531 [2024-05-15 00:59:58.411329] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.531 [2024-05-15 00:59:58.411599] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.531 [2024-05-15 00:59:58.411620] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.531 [2024-05-15 00:59:58.411635] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.531 [2024-05-15 00:59:58.415721] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.531 [2024-05-15 00:59:58.424648] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.531 [2024-05-15 00:59:58.425119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.425430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.425484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.531 [2024-05-15 00:59:58.425503] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.531 [2024-05-15 00:59:58.425775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.531 [2024-05-15 00:59:58.426057] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.531 [2024-05-15 00:59:58.426080] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.531 [2024-05-15 00:59:58.426095] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.531 [2024-05-15 00:59:58.430210] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.531 [2024-05-15 00:59:58.439183] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.531 [2024-05-15 00:59:58.439806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.440151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.440192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.531 [2024-05-15 00:59:58.440211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.531 [2024-05-15 00:59:58.440483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.531 [2024-05-15 00:59:58.440753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.531 [2024-05-15 00:59:58.440775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.531 [2024-05-15 00:59:58.440790] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.531 [2024-05-15 00:59:58.444890] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.531 [2024-05-15 00:59:58.453545] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.531 [2024-05-15 00:59:58.454033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.454362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.454412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.531 [2024-05-15 00:59:58.454429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.531 [2024-05-15 00:59:58.454709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.531 [2024-05-15 00:59:58.454991] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.531 [2024-05-15 00:59:58.455014] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.531 [2024-05-15 00:59:58.455029] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.531 [2024-05-15 00:59:58.459108] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.531 [2024-05-15 00:59:58.468059] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.531 [2024-05-15 00:59:58.468598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.468833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.468901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.531 [2024-05-15 00:59:58.468927] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.531 [2024-05-15 00:59:58.469217] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.531 [2024-05-15 00:59:58.469494] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.531 [2024-05-15 00:59:58.469516] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.531 [2024-05-15 00:59:58.469530] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.531 [2024-05-15 00:59:58.473642] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.531 [2024-05-15 00:59:58.482616] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.531 [2024-05-15 00:59:58.483209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.483521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.483584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.531 [2024-05-15 00:59:58.483602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.531 [2024-05-15 00:59:58.483880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.531 [2024-05-15 00:59:58.484160] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.531 [2024-05-15 00:59:58.484182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.531 [2024-05-15 00:59:58.484197] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.531 [2024-05-15 00:59:58.488291] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.531 [2024-05-15 00:59:58.497024] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.531 [2024-05-15 00:59:58.497628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.497986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.531 [2024-05-15 00:59:58.498016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.531 [2024-05-15 00:59:58.498034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.531 [2024-05-15 00:59:58.498307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.531 [2024-05-15 00:59:58.498576] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.531 [2024-05-15 00:59:58.498598] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.532 [2024-05-15 00:59:58.498613] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.532 [2024-05-15 00:59:58.502711] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.532 [2024-05-15 00:59:58.511599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.532 [2024-05-15 00:59:58.512178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.532 [2024-05-15 00:59:58.512455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.532 [2024-05-15 00:59:58.512504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.532 [2024-05-15 00:59:58.512521] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.532 [2024-05-15 00:59:58.512800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.532 [2024-05-15 00:59:58.513083] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.532 [2024-05-15 00:59:58.513105] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.532 [2024-05-15 00:59:58.513120] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.532 [2024-05-15 00:59:58.517245] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.532 [2024-05-15 00:59:58.526200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.532 [2024-05-15 00:59:58.526687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.532 [2024-05-15 00:59:58.526911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.532 [2024-05-15 00:59:58.526950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.532 [2024-05-15 00:59:58.526969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.532 [2024-05-15 00:59:58.527241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.532 [2024-05-15 00:59:58.527511] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.532 [2024-05-15 00:59:58.527533] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.532 [2024-05-15 00:59:58.527548] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.532 [2024-05-15 00:59:58.531718] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.532 [2024-05-15 00:59:58.540729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.532 [2024-05-15 00:59:58.541331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.532 [2024-05-15 00:59:58.541579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.532 [2024-05-15 00:59:58.541624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.532 [2024-05-15 00:59:58.541641] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.532 [2024-05-15 00:59:58.541914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.532 [2024-05-15 00:59:58.542195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.532 [2024-05-15 00:59:58.542217] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.532 [2024-05-15 00:59:58.542232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.532 [2024-05-15 00:59:58.546346] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.532 [2024-05-15 00:59:58.555301] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.532 [2024-05-15 00:59:58.555788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.532 [2024-05-15 00:59:58.556089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.532 [2024-05-15 00:59:58.556140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.532 [2024-05-15 00:59:58.556158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.532 [2024-05-15 00:59:58.556430] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.532 [2024-05-15 00:59:58.556708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.532 [2024-05-15 00:59:58.556730] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.532 [2024-05-15 00:59:58.556745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.532 [2024-05-15 00:59:58.560841] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.532 [2024-05-15 00:59:58.569754] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.532 [2024-05-15 00:59:58.570327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.532 [2024-05-15 00:59:58.570570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.532 [2024-05-15 00:59:58.570649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.532 [2024-05-15 00:59:58.570667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.532 [2024-05-15 00:59:58.570941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.532 [2024-05-15 00:59:58.571210] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.532 [2024-05-15 00:59:58.571231] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.532 [2024-05-15 00:59:58.571246] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.532 [2024-05-15 00:59:58.575381] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.532 [2024-05-15 00:59:58.584403] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.532 [2024-05-15 00:59:58.584948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.532 [2024-05-15 00:59:58.585185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.532 [2024-05-15 00:59:58.585238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.532 [2024-05-15 00:59:58.585256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.532 [2024-05-15 00:59:58.585555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.532 [2024-05-15 00:59:58.585834] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.532 [2024-05-15 00:59:58.585856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.532 [2024-05-15 00:59:58.585871] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.792 [2024-05-15 00:59:58.590056] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.792 [2024-05-15 00:59:58.598822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.792 [2024-05-15 00:59:58.599390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.792 [2024-05-15 00:59:58.599728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.792 [2024-05-15 00:59:58.599778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.792 [2024-05-15 00:59:58.599796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.792 [2024-05-15 00:59:58.600079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.792 [2024-05-15 00:59:58.600350] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.792 [2024-05-15 00:59:58.600377] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.792 [2024-05-15 00:59:58.600393] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.792 [2024-05-15 00:59:58.604518] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.792 [2024-05-15 00:59:58.613438] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.792 [2024-05-15 00:59:58.614140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.792 [2024-05-15 00:59:58.614405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.792 [2024-05-15 00:59:58.614458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.792 [2024-05-15 00:59:58.614476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.792 [2024-05-15 00:59:58.614754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.792 [2024-05-15 00:59:58.615037] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.792 [2024-05-15 00:59:58.615059] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.792 [2024-05-15 00:59:58.615074] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.792 [2024-05-15 00:59:58.619183] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.792 [2024-05-15 00:59:58.627909] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.792 [2024-05-15 00:59:58.628470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.792 [2024-05-15 00:59:58.628751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.792 [2024-05-15 00:59:58.628779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.792 [2024-05-15 00:59:58.628797] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.792 [2024-05-15 00:59:58.629082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.792 [2024-05-15 00:59:58.629359] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.792 [2024-05-15 00:59:58.629381] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.792 [2024-05-15 00:59:58.629395] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.792 [2024-05-15 00:59:58.633505] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.792 [2024-05-15 00:59:58.642415] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.792 [2024-05-15 00:59:58.642907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.792 [2024-05-15 00:59:58.643230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.792 [2024-05-15 00:59:58.643279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.792 [2024-05-15 00:59:58.643296] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.792 [2024-05-15 00:59:58.643569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.792 [2024-05-15 00:59:58.643839] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.792 [2024-05-15 00:59:58.643861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.792 [2024-05-15 00:59:58.643883] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.792 [2024-05-15 00:59:58.648017] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.792 [2024-05-15 00:59:58.656982] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.792 [2024-05-15 00:59:58.657576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.792 [2024-05-15 00:59:58.657865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.792 [2024-05-15 00:59:58.657893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.792 [2024-05-15 00:59:58.657911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.792 [2024-05-15 00:59:58.658200] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.792 [2024-05-15 00:59:58.658471] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.792 [2024-05-15 00:59:58.658492] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.792 [2024-05-15 00:59:58.658507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.792 [2024-05-15 00:59:58.662636] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.792 [2024-05-15 00:59:58.671366] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.792 [2024-05-15 00:59:58.671834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.792 [2024-05-15 00:59:58.672145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.792 [2024-05-15 00:59:58.672199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.792 [2024-05-15 00:59:58.672217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.792 [2024-05-15 00:59:58.672490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.792 [2024-05-15 00:59:58.672759] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.792 [2024-05-15 00:59:58.672781] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.792 [2024-05-15 00:59:58.672795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.792 [2024-05-15 00:59:58.676885] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.792 [2024-05-15 00:59:58.685819] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.792 [2024-05-15 00:59:58.686423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.792 [2024-05-15 00:59:58.686728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.792 [2024-05-15 00:59:58.686779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.792 [2024-05-15 00:59:58.686797] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.792 [2024-05-15 00:59:58.687084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.792 [2024-05-15 00:59:58.687361] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.792 [2024-05-15 00:59:58.687383] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.792 [2024-05-15 00:59:58.687398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.792 [2024-05-15 00:59:58.691526] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.792 [2024-05-15 00:59:58.700247] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.792 [2024-05-15 00:59:58.700792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.701037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.701078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.793 [2024-05-15 00:59:58.701097] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.793 [2024-05-15 00:59:58.701370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.793 [2024-05-15 00:59:58.701639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.793 [2024-05-15 00:59:58.701661] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.793 [2024-05-15 00:59:58.701676] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.793 [2024-05-15 00:59:58.705769] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.793 [2024-05-15 00:59:58.714669] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.793 [2024-05-15 00:59:58.715264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.715564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.715614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.793 [2024-05-15 00:59:58.715632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.793 [2024-05-15 00:59:58.715905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.793 [2024-05-15 00:59:58.716184] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.793 [2024-05-15 00:59:58.716206] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.793 [2024-05-15 00:59:58.716222] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.793 [2024-05-15 00:59:58.720309] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.793 [2024-05-15 00:59:58.729230] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.793 [2024-05-15 00:59:58.729796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.730138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.730193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.793 [2024-05-15 00:59:58.730212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.793 [2024-05-15 00:59:58.730486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.793 [2024-05-15 00:59:58.730755] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.793 [2024-05-15 00:59:58.730777] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.793 [2024-05-15 00:59:58.730792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.793 [2024-05-15 00:59:58.734892] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.793 [2024-05-15 00:59:58.743800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.793 [2024-05-15 00:59:58.744444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.744723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.744768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.793 [2024-05-15 00:59:58.744786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.793 [2024-05-15 00:59:58.745069] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.793 [2024-05-15 00:59:58.745340] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.793 [2024-05-15 00:59:58.745361] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.793 [2024-05-15 00:59:58.745376] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.793 [2024-05-15 00:59:58.749467] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.793 [2024-05-15 00:59:58.758401] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.793 [2024-05-15 00:59:58.759011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.759353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.759404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.793 [2024-05-15 00:59:58.759421] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.793 [2024-05-15 00:59:58.759694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.793 [2024-05-15 00:59:58.759977] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.793 [2024-05-15 00:59:58.759999] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.793 [2024-05-15 00:59:58.760014] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.793 [2024-05-15 00:59:58.764116] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.793 [2024-05-15 00:59:58.772794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.793 [2024-05-15 00:59:58.773418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.773728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.773778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.793 [2024-05-15 00:59:58.773796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.793 [2024-05-15 00:59:58.774081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.793 [2024-05-15 00:59:58.774352] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.793 [2024-05-15 00:59:58.774374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.793 [2024-05-15 00:59:58.774388] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.793 [2024-05-15 00:59:58.778469] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.793 [2024-05-15 00:59:58.787367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.793 [2024-05-15 00:59:58.787800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.788022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.788051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.793 [2024-05-15 00:59:58.788069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.793 [2024-05-15 00:59:58.788336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.793 [2024-05-15 00:59:58.788611] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.793 [2024-05-15 00:59:58.788633] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.793 [2024-05-15 00:59:58.788648] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.793 [2024-05-15 00:59:58.792753] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.793 [2024-05-15 00:59:58.801955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.793 [2024-05-15 00:59:58.802498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.802785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.802839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.793 [2024-05-15 00:59:58.802857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.793 [2024-05-15 00:59:58.803141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.793 [2024-05-15 00:59:58.803411] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.793 [2024-05-15 00:59:58.803433] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.793 [2024-05-15 00:59:58.803448] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.793 [2024-05-15 00:59:58.807524] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.793 [2024-05-15 00:59:58.816431] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.793 [2024-05-15 00:59:58.816926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.817217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.817258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.793 [2024-05-15 00:59:58.817275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.793 [2024-05-15 00:59:58.817553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.793 [2024-05-15 00:59:58.817822] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.793 [2024-05-15 00:59:58.817843] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.793 [2024-05-15 00:59:58.817858] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.793 [2024-05-15 00:59:58.821961] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.793 [2024-05-15 00:59:58.830908] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.793 [2024-05-15 00:59:58.831390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.831543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.793 [2024-05-15 00:59:58.831569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.794 [2024-05-15 00:59:58.831592] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.794 [2024-05-15 00:59:58.831865] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.794 [2024-05-15 00:59:58.832145] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.794 [2024-05-15 00:59:58.832168] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.794 [2024-05-15 00:59:58.832183] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.794 [2024-05-15 00:59:58.836324] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:11.794 [2024-05-15 00:59:58.845557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.794 [2024-05-15 00:59:58.846034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.794 [2024-05-15 00:59:58.846307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.794 [2024-05-15 00:59:58.846355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:11.794 [2024-05-15 00:59:58.846373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:11.794 [2024-05-15 00:59:58.846639] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:11.794 [2024-05-15 00:59:58.846953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:11.794 [2024-05-15 00:59:58.846985] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:11.794 [2024-05-15 00:59:58.847001] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:12.053 [2024-05-15 00:59:58.851159] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:12.053 [2024-05-15 00:59:58.860132] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:12.053 [2024-05-15 00:59:58.860631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.053 [2024-05-15 00:59:58.860855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.053 [2024-05-15 00:59:58.860898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:12.053 [2024-05-15 00:59:58.860916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:12.053 [2024-05-15 00:59:58.861191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:12.053 [2024-05-15 00:59:58.861463] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:12.053 [2024-05-15 00:59:58.861484] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:12.053 [2024-05-15 00:59:58.861499] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:12.053 [2024-05-15 00:59:58.865614] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:12.053 [2024-05-15 00:59:58.874571] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:12.053 [2024-05-15 00:59:58.875029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.053 [2024-05-15 00:59:58.875274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.053 [2024-05-15 00:59:58.875306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:12.053 [2024-05-15 00:59:58.875340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:12.053 [2024-05-15 00:59:58.875633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:12.053 [2024-05-15 00:59:58.875902] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:12.053 [2024-05-15 00:59:58.875923] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:12.053 [2024-05-15 00:59:58.875949] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:12.053 [2024-05-15 00:59:58.880054] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:12.053 [2024-05-15 00:59:58.888998] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:12.053 [2024-05-15 00:59:58.889523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.053 [2024-05-15 00:59:58.889787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.053 [2024-05-15 00:59:58.889837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:12.053 [2024-05-15 00:59:58.889853] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:12.053 [2024-05-15 00:59:58.890128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:12.053 [2024-05-15 00:59:58.890397] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:12.053 [2024-05-15 00:59:58.890418] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:12.053 [2024-05-15 00:59:58.890432] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:12.053 [2024-05-15 00:59:58.894532] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:12.053 [2024-05-15 00:59:58.903462] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:12.053 [2024-05-15 00:59:58.904021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.053 [2024-05-15 00:59:58.904244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.053 [2024-05-15 00:59:58.904272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:12.053 [2024-05-15 00:59:58.904289] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:12.053 [2024-05-15 00:59:58.904562] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:12.054 [2024-05-15 00:59:58.904831] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:12.054 [2024-05-15 00:59:58.904854] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:12.054 [2024-05-15 00:59:58.904869] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:12.054 [2024-05-15 00:59:58.908985] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:12.054 [2024-05-15 00:59:58.917916] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:12.054 [2024-05-15 00:59:58.918403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.054 [2024-05-15 00:59:58.918684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.054 [2024-05-15 00:59:58.918730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:12.054 [2024-05-15 00:59:58.918747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:12.054 [2024-05-15 00:59:58.919026] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:12.054 [2024-05-15 00:59:58.919326] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:12.054 [2024-05-15 00:59:58.919348] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:12.054 [2024-05-15 00:59:58.919363] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:12.054 [2024-05-15 00:59:58.923495] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:12.054 [2024-05-15 00:59:58.932505] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:12.054 [2024-05-15 00:59:58.933064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.054 [2024-05-15 00:59:58.933336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.054 [2024-05-15 00:59:58.933364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:12.054 [2024-05-15 00:59:58.933382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:12.054 [2024-05-15 00:59:58.933655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:12.054 [2024-05-15 00:59:58.933925] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:12.054 [2024-05-15 00:59:58.933957] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:12.054 [2024-05-15 00:59:58.933972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:12.054 [2024-05-15 00:59:58.938073] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:12.054 [2024-05-15 00:59:58.947055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:12.054 [2024-05-15 00:59:58.947637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.054 [2024-05-15 00:59:58.947856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.054 [2024-05-15 00:59:58.947900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:12.054 [2024-05-15 00:59:58.947917] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:12.054 [2024-05-15 00:59:58.948200] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:12.054 [2024-05-15 00:59:58.948471] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:12.054 [2024-05-15 00:59:58.948493] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:12.054 [2024-05-15 00:59:58.948508] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:12.054 [2024-05-15 00:59:58.952647] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:12.054 [2024-05-15 00:59:58.961620] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:12.054 [2024-05-15 00:59:58.962131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.054 [2024-05-15 00:59:58.962413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.054 [2024-05-15 00:59:58.962458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:12.054 [2024-05-15 00:59:58.962476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:12.054 [2024-05-15 00:59:58.962749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:12.054 [2024-05-15 00:59:58.963032] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:12.054 [2024-05-15 00:59:58.963060] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:12.054 [2024-05-15 00:59:58.963076] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:12.054 [2024-05-15 00:59:58.967176] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:12.054 [2024-05-15 00:59:58.976165] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:12.054 [2024-05-15 00:59:58.976670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.054 [2024-05-15 00:59:58.976881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.054 [2024-05-15 00:59:58.976907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:12.054 [2024-05-15 00:59:58.976924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:12.054 [2024-05-15 00:59:58.977200] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:12.054 [2024-05-15 00:59:58.977471] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:12.054 [2024-05-15 00:59:58.977493] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:12.054 [2024-05-15 00:59:58.977507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:12.054 [2024-05-15 00:59:58.981655] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:12.054 [2024-05-15 00:59:58.990616] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:12.054 [2024-05-15 00:59:58.991151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.054 [2024-05-15 00:59:58.991363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.054 [2024-05-15 00:59:58.991398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:12.054 [2024-05-15 00:59:58.991428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:12.054 [2024-05-15 00:59:58.991701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:12.054 [2024-05-15 00:59:58.991982] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:12.054 [2024-05-15 00:59:58.992005] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:12.054 [2024-05-15 00:59:58.992019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:12.054 [2024-05-15 00:59:58.996132] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
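Every block above is one pass through the same cycle: disconnect the controller, attempt a TCP reconnect, get ECONNREFUSED, mark the controller failed, and try again roughly every 15 ms until the target comes back. A schematic C sketch of that retry shape (hypothetical helper names; this is not bdev_nvme's implementation, and the real loop is unbounded unless a controller-loss timeout is configured):

/* reset_loop.c: schematic of the disconnect -> reconnect -> fail cycle. */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Stub standing in for the transport reconnect; refused while the target
 * process is down, exactly as in the log above. */
static int try_reconnect(void) { return -ECONNREFUSED; }

int main(void)
{
    for (int attempt = 1; attempt <= 3; attempt++) {
        fprintf(stderr, "resetting controller (attempt %d)\n", attempt);
        if (try_reconnect() == 0) {
            fprintf(stderr, "controller reconnected\n");
            return 0;
        }
        fprintf(stderr, "Resetting controller failed.\n");
        usleep(15000);   /* mirrors the ~15 ms spacing between cycles */
    }
    return 1;
}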
00:23:12.054 [2024-05-15 00:59:59.005093] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.054 [2024-05-15 00:59:59.005683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.054 [2024-05-15 00:59:59.005942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.054 [2024-05-15 00:59:59.005971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.054 [2024-05-15 00:59:59.005989] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.054 [2024-05-15 00:59:59.006262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.054 [2024-05-15 00:59:59.006531] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.054 [2024-05-15 00:59:59.006552] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.054 [2024-05-15 00:59:59.006587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.054 [2024-05-15 00:59:59.010716] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 4084138 Killed "${NVMF_APP[@]}" "$@" 00:23:12.054 00:59:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:23:12.054 00:59:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:12.054 00:59:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:12.054 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:12.054 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:12.054 [2024-05-15 00:59:59.019680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.054 [2024-05-15 00:59:59.020179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.054 [2024-05-15 00:59:59.020404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.054 [2024-05-15 00:59:59.020434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.054 [2024-05-15 00:59:59.020463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.054 [2024-05-15 00:59:59.020730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.054 00:59:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:12.054 00:59:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=4084956 00:23:12.054 [2024-05-15 00:59:59.021016] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.054 [2024-05-15 00:59:59.021042] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.054 [2024-05-15 00:59:59.021057] 
nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.054 00:59:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 4084956 00:23:12.054 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 4084956 ']' 00:23:12.054 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.054 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:12.054 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.054 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:12.055 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:12.055 [2024-05-15 00:59:59.025156] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.055 [2024-05-15 00:59:59.034088] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.055 [2024-05-15 00:59:59.034554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.055 [2024-05-15 00:59:59.034704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.055 [2024-05-15 00:59:59.034729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.055 [2024-05-15 00:59:59.034747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.055 [2024-05-15 00:59:59.035022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.055 [2024-05-15 00:59:59.035292] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.055 [2024-05-15 00:59:59.035320] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.055 [2024-05-15 00:59:59.035335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.055 [2024-05-15 00:59:59.039441] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
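The trace above shows tgt_init relaunching the target (nvmf_tgt -i 0 -e 0xFFFF -m 0xE inside the cvl_0_0_ns_spdk namespace), recording its pid in nvmfpid, and waitforlisten blocking until the new process answers on /var/tmp/spdk.sock. A rough sketch of that wait, assuming the stock rpc.py client (the real helper lives in autotest_common.sh):

    # Poll the UNIX-domain RPC socket until the freshly started target
    # responds; rpc_get_methods is a cheap RPC with no side effects.
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done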
00:23:12.055 [2024-05-15 00:59:59.048601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.055 [2024-05-15 00:59:59.049099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.055 [2024-05-15 00:59:59.049295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.055 [2024-05-15 00:59:59.049326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.055 [2024-05-15 00:59:59.049344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.055 [2024-05-15 00:59:59.049623] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.055 [2024-05-15 00:59:59.049894] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.055 [2024-05-15 00:59:59.049916] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.055 [2024-05-15 00:59:59.049938] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.055 [2024-05-15 00:59:59.054030] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.055 [2024-05-15 00:59:59.063179] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.055 [2024-05-15 00:59:59.063713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.055 [2024-05-15 00:59:59.063945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.055 [2024-05-15 00:59:59.063974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.055 [2024-05-15 00:59:59.063994] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.055 [2024-05-15 00:59:59.064275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.055 [2024-05-15 00:59:59.064548] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.055 [2024-05-15 00:59:59.064570] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.055 [2024-05-15 00:59:59.064587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.055 [2024-05-15 00:59:59.068680] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.055 [2024-05-15 00:59:59.069675] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:23:12.055 [2024-05-15 00:59:59.069743] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.055 [2024-05-15 00:59:59.077584] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.055 [2024-05-15 00:59:59.078031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.055 [2024-05-15 00:59:59.078203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.055 [2024-05-15 00:59:59.078234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.055 [2024-05-15 00:59:59.078252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.055 [2024-05-15 00:59:59.078525] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.055 [2024-05-15 00:59:59.078802] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.055 [2024-05-15 00:59:59.078824] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.055 [2024-05-15 00:59:59.078840] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.055 [2024-05-15 00:59:59.082919] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.055 [2024-05-15 00:59:59.092064] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.055 [2024-05-15 00:59:59.092533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.055 [2024-05-15 00:59:59.092749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.055 [2024-05-15 00:59:59.092777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.055 [2024-05-15 00:59:59.092795] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.055 [2024-05-15 00:59:59.093079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.055 [2024-05-15 00:59:59.093356] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.055 [2024-05-15 00:59:59.093378] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.055 [2024-05-15 00:59:59.093393] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.055 [2024-05-15 00:59:59.097469] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:12.055 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.055 [2024-05-15 00:59:59.106669] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.055 [2024-05-15 00:59:59.107192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.055 [2024-05-15 00:59:59.107369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.055 [2024-05-15 00:59:59.107399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.055 [2024-05-15 00:59:59.107425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.055 [2024-05-15 00:59:59.107700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.055 [2024-05-15 00:59:59.107980] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.055 [2024-05-15 00:59:59.108004] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.055 [2024-05-15 00:59:59.108020] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.315 [2024-05-15 00:59:59.112227] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.315 [2024-05-15 00:59:59.121167] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.315 [2024-05-15 00:59:59.121644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.315 [2024-05-15 00:59:59.121821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.315 [2024-05-15 00:59:59.121848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.315 [2024-05-15 00:59:59.121866] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.315 [2024-05-15 00:59:59.122143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.315 [2024-05-15 00:59:59.122420] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.315 [2024-05-15 00:59:59.122442] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.315 [2024-05-15 00:59:59.122457] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.315 [2024-05-15 00:59:59.126543] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
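The EAL line above is informational: it means no free 2048 kB hugepages were reported on NUMA node 1, not that allocation failed. Per-node hugepage state can be inspected through the standard kernel interfaces:

    # Free 2 MiB hugepages per NUMA node, plus the global summary:
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
    grep -i huge /proc/meminfo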
00:23:12.315 [2024-05-15 00:59:59.135735] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.315 [2024-05-15 00:59:59.136204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.315 [2024-05-15 00:59:59.136364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:12.315 [2024-05-15 00:59:59.136385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.315 [2024-05-15 00:59:59.136414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.315 [2024-05-15 00:59:59.136432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.315 [2024-05-15 00:59:59.136705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.315 [2024-05-15 00:59:59.136987] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.315 [2024-05-15 00:59:59.137010] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.315 [2024-05-15 00:59:59.137025] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.315 [2024-05-15 00:59:59.141131] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.315 [2024-05-15 00:59:59.150174] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.315 [2024-05-15 00:59:59.150753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.315 [2024-05-15 00:59:59.151009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.315 [2024-05-15 00:59:59.151053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.315 [2024-05-15 00:59:59.151077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.315 [2024-05-15 00:59:59.151360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.315 [2024-05-15 00:59:59.151634] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.315 [2024-05-15 00:59:59.151656] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.315 [2024-05-15 00:59:59.151675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.315 [2024-05-15 00:59:59.155765] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
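"Total cores available: 3" is consistent with the -m 0xE core mask passed to nvmf_tgt earlier: 0xE is binary 1110, i.e. bits 1, 2 and 3 set, and the reactors do come up on cores 1-3 a little further down. Checked in shell:

    # 0xE = 14 = 0b1110 -> bits 1, 2 and 3 set, so reactors pin to cores 1-3
    echo "obase=2; $((0xE))" | bc    # prints 1110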
00:23:12.315 [2024-05-15 00:59:59.164659] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:12.315 [2024-05-15 00:59:59.165196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.315 [2024-05-15 00:59:59.165375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.315 [2024-05-15 00:59:59.165404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:12.315 [2024-05-15 00:59:59.165424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:12.315 [2024-05-15 00:59:59.165695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:12.315 [2024-05-15 00:59:59.165985] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:12.315 [2024-05-15 00:59:59.166020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:12.315 [2024-05-15 00:59:59.166038] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:12.315 [2024-05-15 00:59:59.170150] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:12.316 [2024-05-15 00:59:59.237295] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:12.316 [2024-05-15 00:59:59.237834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.316 [2024-05-15 00:59:59.238068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.316 [2024-05-15 00:59:59.238099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:12.316 [2024-05-15 00:59:59.238120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:12.316 [2024-05-15 00:59:59.238399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:12.316 [2024-05-15 00:59:59.238677] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:12.316 [2024-05-15 00:59:59.238700] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:12.316 [2024-05-15 00:59:59.238717] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:12.316 [2024-05-15 00:59:59.242809] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:12.316 [2024-05-15 00:59:59.251748] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.316 [2024-05-15 00:59:59.252246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.316 [2024-05-15 00:59:59.252460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.316 [2024-05-15 00:59:59.252487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.316 [2024-05-15 00:59:59.252507] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.316 [2024-05-15 00:59:59.252781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.316 [2024-05-15 00:59:59.252911] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.316 [2024-05-15 00:59:59.252954] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.316 [2024-05-15 00:59:59.252971] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.316 [2024-05-15 00:59:59.252993] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.316 [2024-05-15 00:59:59.253005] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.316 [2024-05-15 00:59:59.253066] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.316 [2024-05-15 00:59:59.253088] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.316 [2024-05-15 00:59:59.253104] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.316 [2024-05-15 00:59:59.253083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.316 [2024-05-15 00:59:59.253344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:12.316 [2024-05-15 00:59:59.253378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.316 [2024-05-15 00:59:59.257220] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
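The app_setup_trace notices above give the exact recipe for grabbing tracepoint data while this target instance (-i 0) is alive; the lines below only restate what the log itself suggests:

    # Snapshot the nvmf tracepoints of SPDK instance 0 at runtime:
    spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis:
    cp /dev/shm/nvmf_trace.0 /tmp/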
00:23:12.316 [2024-05-15 00:59:59.266278] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:12.316 [2024-05-15 00:59:59.266878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.316 [2024-05-15 00:59:59.267080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.316 [2024-05-15 00:59:59.267108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:12.316 [2024-05-15 00:59:59.267130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:12.316 [2024-05-15 00:59:59.267410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:12.316 [2024-05-15 00:59:59.267688] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:12.316 [2024-05-15 00:59:59.267710] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:12.316 [2024-05-15 00:59:59.267728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:12.316 [2024-05-15 00:59:59.271865] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:12.317 [2024-05-15 00:59:59.339462] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:12.317 [2024-05-15 00:59:59.339939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.317 [2024-05-15 00:59:59.340143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.317 [2024-05-15 00:59:59.340172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420
00:23:12.317 [2024-05-15 00:59:59.340190] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set
00:23:12.317 [2024-05-15 00:59:59.340486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor
00:23:12.317 [2024-05-15 00:59:59.340757] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:12.317 [2024-05-15 00:59:59.340779] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:12.317 [2024-05-15 00:59:59.340795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:12.317 [2024-05-15 00:59:59.344881] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:12.317 [2024-05-15 00:59:59.354079] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.317 [2024-05-15 00:59:59.354570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.317 [2024-05-15 00:59:59.354748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.317 [2024-05-15 00:59:59.354777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.317 [2024-05-15 00:59:59.354796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.317 [2024-05-15 00:59:59.355096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.317 [2024-05-15 00:59:59.355369] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.317 [2024-05-15 00:59:59.355392] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.317 [2024-05-15 00:59:59.355408] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.317 [2024-05-15 00:59:59.359484] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.317 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:12.317 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:23:12.317 00:59:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.317 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.317 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:12.317 [2024-05-15 00:59:59.368697] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.317 [2024-05-15 00:59:59.369174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.317 [2024-05-15 00:59:59.369341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.317 [2024-05-15 00:59:59.369368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.317 [2024-05-15 00:59:59.369386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.317 [2024-05-15 00:59:59.369652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.317 [2024-05-15 00:59:59.369948] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.317 [2024-05-15 00:59:59.369987] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.317 [2024-05-15 00:59:59.370014] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.576 [2024-05-15 00:59:59.374253] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:12.576 [2024-05-15 00:59:59.383209] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.576 [2024-05-15 00:59:59.383645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.576 [2024-05-15 00:59:59.383812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.576 [2024-05-15 00:59:59.383849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.576 [2024-05-15 00:59:59.383867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.576 [2024-05-15 00:59:59.384142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.576 [2024-05-15 00:59:59.384413] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.576 [2024-05-15 00:59:59.384437] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.576 [2024-05-15 00:59:59.384451] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.576 00:59:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.576 00:59:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.576 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.576 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:12.576 [2024-05-15 00:59:59.388532] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.576 [2024-05-15 00:59:59.394875] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.576 [2024-05-15 00:59:59.397724] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.576 [2024-05-15 00:59:59.398157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.576 [2024-05-15 00:59:59.398301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.576 [2024-05-15 00:59:59.398329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.576 [2024-05-15 00:59:59.398346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.576 [2024-05-15 00:59:59.398612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.576 [2024-05-15 00:59:59.398881] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.576 [2024-05-15 00:59:59.398902] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.576 [2024-05-15 00:59:59.398917] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:12.576 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.576 00:59:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:12.576 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.576 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:12.576 [2024-05-15 00:59:59.403008] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.576 [2024-05-15 00:59:59.412222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.576 [2024-05-15 00:59:59.412710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.576 [2024-05-15 00:59:59.412888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.576 [2024-05-15 00:59:59.412918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.576 [2024-05-15 00:59:59.412946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.576 [2024-05-15 00:59:59.413220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.576 [2024-05-15 00:59:59.413490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.576 [2024-05-15 00:59:59.413518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.576 [2024-05-15 00:59:59.413534] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.576 [2024-05-15 00:59:59.417671] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.576 [2024-05-15 00:59:59.426700] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.576 [2024-05-15 00:59:59.427348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.576 [2024-05-15 00:59:59.427567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.576 [2024-05-15 00:59:59.427596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.576 [2024-05-15 00:59:59.427618] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.576 [2024-05-15 00:59:59.427906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.576 [2024-05-15 00:59:59.428194] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.576 [2024-05-15 00:59:59.428217] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.576 [2024-05-15 00:59:59.428237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.576 [2024-05-15 00:59:59.432377] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
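The rpc_cmd calls traced above are thin wrappers; expressed directly against the new target's RPC socket they would look roughly like this (a sketch; rpc_cmd in the suite resolves the socket path itself, and the arguments are copied from the trace):

    # TCP transport with an 8192-byte IO unit size, then a 64 MiB malloc
    # bdev with 512-byte blocks named Malloc0:
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0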
00:23:12.576 Malloc0 00:23:12.576 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.576 00:59:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:12.576 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.576 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:12.576 [2024-05-15 00:59:59.441337] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.576 [2024-05-15 00:59:59.441953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.576 [2024-05-15 00:59:59.442139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.576 [2024-05-15 00:59:59.442167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f0d0 with addr=10.0.0.2, port=4420 00:23:12.576 [2024-05-15 00:59:59.442188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f0d0 is same with the state(5) to be set 00:23:12.576 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.576 [2024-05-15 00:59:59.442467] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f0d0 (9): Bad file descriptor 00:23:12.576 00:59:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:12.576 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.576 [2024-05-15 00:59:59.442742] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.576 [2024-05-15 00:59:59.442765] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.576 [2024-05-15 00:59:59.442783] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.576 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:12.576 [2024-05-15 00:59:59.446889] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
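Same pattern for the subsystem plumbing: create cnode1 (-a allows any host NQN, -s sets the serial number) and attach Malloc0 as its namespace. A sketch of the equivalent direct calls, again assuming the default socket path:

    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem \
        nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 Malloc0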
00:23:12.576 00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:59:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-05-15 00:59:59.454129] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
[2024-05-15 00:59:59.454382] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
[2024-05-15 00:59:59.455779] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:59:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:59:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 4084361
[2024-05-15 00:59:59.493881] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:22.558
00:23:22.558                                    Latency(us)
00:23:22.558 Device Information : runtime(s)      IOPS    MiB/s   Fail/s   TO/s    Average      min       max
00:23:22.558 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:22.558 Verification LBA range: start 0x0 length 0x4000
00:23:22.559 Nvme1n1            :      15.02    5711.16   22.31  7179.09   0.00    9898.73   946.63  22039.51
00:23:22.559 ===================================================================================================================
00:23:22.559 Total              :               5711.16   22.31  7179.09   0.00    9898.73   946.63  22039.51
00:23:22.559 01:00:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
01:00:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
01:00:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
01:00:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
01:00:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
01:00:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
01:00:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
01:00:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
01:00:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
01:00:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
01:00:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
01:00:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
01:00:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
01:00:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
01:00:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
01:00:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
01:00:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 4084956 ']'
01:00:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 4084956
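The summary table is internally consistent: at the 4096-byte IO size shown in the job line, the IOPS column implies the MiB/s column. Quick check in shell:

    # 5711.16 IOs/s * 4096 B per IO / 1048576 B per MiB ~= 22.31 MiB/s
    echo 'scale=4; 5711.16 * 4096 / 1048576' | bc    # 22.3093, matching the table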
00:23:22.559 01:00:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 4084956 ']'
01:00:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 4084956
01:00:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname
01:00:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
01:00:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4084956
01:00:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1
01:00:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
01:00:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4084956'
killing process with pid 4084956
01:00:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 4084956
[2024-05-15 01:00:08.776493] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
01:00:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 4084956
01:00:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
01:00:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
01:00:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
01:00:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
01:00:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
01:00:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
01:00:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
01:00:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:24.457 01:00:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:24.457
00:23:24.457 real	0m21.884s
00:23:24.457 user	0m59.383s
00:23:24.457 sys	0m3.841s
01:00:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable
01:00:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:23:24.457 ************************************
00:23:24.457 END TEST nvmf_bdevperf
00:23:24.457 ************************************
00:23:24.457 01:00:11 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
01:00:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
01:00:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
01:00:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:24.457 ************************************
00:23:24.457 START TEST nvmf_target_disconnect
00:23:24.457 ************************************
00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
* Looking for test storage...
00:23:24.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.457 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestinit 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:23:24.458 01:00:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:23:25.836 Found 0000:08:00.0 (0x8086 - 0x159b) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:23:25.836 Found 0000:08:00.1 (0x8086 - 0x159b) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.836 01:00:12 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:23:25.836 Found net devices under 0000:08:00.0: cvl_0_0 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:23:25.836 Found net devices under 0000:08:00.1: cvl_0_1 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.836 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:25.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:23:25.837 00:23:25.837 --- 10.0.0.2 ping statistics --- 00:23:25.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.837 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:25.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:23:25.837 00:23:25.837 --- 10.0.0.1 ping statistics --- 00:23:25.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.837 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:25.837 01:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:26.095 01:00:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:23:26.095 01:00:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:26.095 01:00:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:26.095 01:00:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:26.095 ************************************ 00:23:26.095 START TEST nvmf_target_disconnect_tc1 00:23:26.095 ************************************ 00:23:26.095 01:00:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:23:26.095 01:00:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # set +e 00:23:26.095 01:00:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:26.095 EAL: No 
free 2048 kB hugepages reported on node 1 00:23:26.095 [2024-05-15 01:00:13.003878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:26.095 [2024-05-15 01:00:13.004183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:26.095 [2024-05-15 01:00:13.004225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208be40 with addr=10.0.0.2, port=4420 00:23:26.095 [2024-05-15 01:00:13.004266] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:26.095 [2024-05-15 01:00:13.004288] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:26.095 [2024-05-15 01:00:13.004303] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:23:26.095 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:23:26.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:23:26.095 Initializing NVMe Controllers 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # trap - ERR 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # print_backtrace 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # return 0 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@41 -- # set -e 00:23:26.095 00:23:26.095 real 0m0.088s 00:23:26.095 user 0m0.034s 00:23:26.095 sys 0m0.053s 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.095 ************************************ 00:23:26.095 END TEST nvmf_target_disconnect_tc1 00:23:26.095 ************************************ 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:26.095 ************************************ 00:23:26.095 START TEST nvmf_target_disconnect_tc2 00:23:26.095 ************************************ 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4087922 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4087922 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 4087922 ']' 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:26.095 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.095 [2024-05-15 01:00:13.124883] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:23:26.095 [2024-05-15 01:00:13.124983] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.354 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.354 [2024-05-15 01:00:13.190412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:26.354 [2024-05-15 01:00:13.308616] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.354 [2024-05-15 01:00:13.308676] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.354 [2024-05-15 01:00:13.308692] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.354 [2024-05-15 01:00:13.308705] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.354 [2024-05-15 01:00:13.308717] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
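[Editor's note, not part of the captured log] For readers skimming the trace: the nvmf_tgt app started above runs inside the cvl_0_0_ns_spdk namespace with core mask 0xF0, and the lines that follow configure it through rpc_cmd (the test's wrapper around scripts/rpc.py). Condensed into plain commands, the bring-up traced below is roughly the following sketch; the binary and script paths are abbreviated here and the real test drives everything through its own helpers:

    # sketch of the target bring-up (paths abbreviated, assumptions noted above)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB / 512 B backing bdev
    ./scripts/rpc.py nvmf_create_transport -t tcp -o                 # TCP transport, opts as in the log
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420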
00:23:26.354 [2024-05-15 01:00:13.309294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:23:26.354 [2024-05-15 01:00:13.309434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:23:26.354 [2024-05-15 01:00:13.309648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:23:26.354 [2024-05-15 01:00:13.309654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.611 Malloc0 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.611 [2024-05-15 01:00:13.483868] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.611 01:00:13 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.611 [2024-05-15 01:00:13.511860] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:26.611 [2024-05-15 01:00:13.512134] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # reconnectpid=4088032 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@52 -- # sleep 2 00:23:26.611 01:00:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:26.611 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.525 01:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@53 -- # kill -9 4087922 00:23:28.525 01:00:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@55 -- # sleep 2 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 
starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 [2024-05-15 01:00:15.540474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 
00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 [2024-05-15 01:00:15.540821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 
Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 [2024-05-15 01:00:15.541178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Read completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.525 Write completed with error (sct=0, sc=8) 00:23:28.525 starting I/O failed 00:23:28.526 Write completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Read completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Write completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Write completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Read completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Write completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Read completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Read completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Write completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Write completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Read completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Read completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Read completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Read completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Read completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Read completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Write completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Write completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Read completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Write completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Write completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Write completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 Read completed with error (sct=0, sc=8) 00:23:28.526 starting I/O failed 00:23:28.526 [2024-05-15 01:00:15.541542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ 
transport error -6 (No such device or address) on qpair id 3 00:23:28.526 [2024-05-15 01:00:15.541846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.542096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.542141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.526 qpair failed and we were unable to recover it. 00:23:28.526 [2024-05-15 01:00:15.542380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.542570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.542599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.526 qpair failed and we were unable to recover it. 00:23:28.526 [2024-05-15 01:00:15.542857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.543173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.543241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.526 qpair failed and we were unable to recover it. 00:23:28.526 [2024-05-15 01:00:15.543448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.543630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.543656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.526 qpair failed and we were unable to recover it. 00:23:28.526 [2024-05-15 01:00:15.543831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.544038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.544080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.526 qpair failed and we were unable to recover it. 00:23:28.526 [2024-05-15 01:00:15.544293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.544473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.544498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.526 qpair failed and we were unable to recover it. 00:23:28.526 [2024-05-15 01:00:15.544706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.545061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.545103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.526 qpair failed and we were unable to recover it. 
00:23:28.526 [2024-05-15 01:00:15.545287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.545538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.545617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.526 qpair failed and we were unable to recover it. 00:23:28.526 [2024-05-15 01:00:15.545851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.546107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.546179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.526 qpair failed and we were unable to recover it. 00:23:28.526 [2024-05-15 01:00:15.546468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.546696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.546721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.526 qpair failed and we were unable to recover it. 00:23:28.526 [2024-05-15 01:00:15.546991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.547289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.547332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.526 qpair failed and we were unable to recover it. 00:23:28.526 [2024-05-15 01:00:15.547586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.547832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.547871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.526 qpair failed and we were unable to recover it. 00:23:28.526 [2024-05-15 01:00:15.548050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.548318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.548386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.526 qpair failed and we were unable to recover it. 00:23:28.526 [2024-05-15 01:00:15.548549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.548807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.526 [2024-05-15 01:00:15.548855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.526 qpair failed and we were unable to recover it. 
00:23:28.526 [2024-05-15 01:00:15.549064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.526 [2024-05-15 01:00:15.549235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.526 [2024-05-15 01:00:15.549262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.526 qpair failed and we were unable to recover it.
[... the same four-record sequence -- two "connect() failed, errno = 111" errors from posix.c:1037:posix_sock_create, one "sock connection error" from nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it." -- repeats continuously for tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 from 01:00:15.549 through 01:00:15.609 ...]
00:23:28.800 [2024-05-15 01:00:15.609264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.800 [2024-05-15 01:00:15.609487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.800 [2024-05-15 01:00:15.609540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:28.800 qpair failed and we were unable to recover it.
[... the same sequence then repeats for tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 through 01:00:15.627 ...]
00:23:28.801 [2024-05-15 01:00:15.627024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.801 [2024-05-15 01:00:15.627173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.801 [2024-05-15 01:00:15.627198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:28.801 qpair failed and we were unable to recover it.
00:23:28.801 [2024-05-15 01:00:15.627428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.627626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.627677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.627964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.628229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.628280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.628561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.628717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.628742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.629000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.629239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.629264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.629491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.629760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.629785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.630066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.630312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.630339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.630474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.630685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.630738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 
00:23:28.801 [2024-05-15 01:00:15.630992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.631172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.631197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.631407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.631536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.631563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.631791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.632109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.632158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.632386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.632730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.632778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.633013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.633199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.633224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.633443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.633640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.633665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.633873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.634091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.634116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 
00:23:28.801 [2024-05-15 01:00:15.634384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.634596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.634621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.634842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.635037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.635090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.635325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.635614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.635661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.635803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.636019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.636050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.636278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.636598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.636645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.636852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.637167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.637214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.801 [2024-05-15 01:00:15.637449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.637759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.637808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 
00:23:28.801 [2024-05-15 01:00:15.638006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.638204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.801 [2024-05-15 01:00:15.638231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.801 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.638507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.638808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.638862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.639001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.639317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.639365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.639550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.639775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.639823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.640062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.640370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.640423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.640683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.640964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.641007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.641246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.641509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.641558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 
00:23:28.802 [2024-05-15 01:00:15.641800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.642086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.642137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.642350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.642644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.642693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.642863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.643124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.643171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.643398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.643680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.643728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.643929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.644157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.644182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.644316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.644537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.644591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.644827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.645022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.645075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 
00:23:28.802 [2024-05-15 01:00:15.645294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.645595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.645654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.645889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.646217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.646264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.646492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.646785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.646833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.647023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.647609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.647637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.647833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.648077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.648129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.648357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.648608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.648658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.648893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.649158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.649210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 
00:23:28.802 [2024-05-15 01:00:15.649433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.649717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.649773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.650105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.650362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.650412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.650661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.650926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.650965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.651168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.651448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.651480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.651754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.652017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.652058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.652225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.652375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.652415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 00:23:28.802 [2024-05-15 01:00:15.652608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.652785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.802 [2024-05-15 01:00:15.652811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.802 qpair failed and we were unable to recover it. 
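Each "qpair failed and we were unable to recover it." record closes one attempt: the TCP socket for that queue pair never connected, the qpair is torn down, and the next attempt starts with a fresh one — visible in the records just above as the tqpair identifier changing from 0x1eff6d0 to 0x7f7eac000b90. The bounded retry-with-backoff loop below is an illustrative sketch of that attempt/give-up pattern only, not SPDK's actual reconnect logic (which lives in nvme_tcp.c and is not reproduced here); the target address, attempt count, and delays are placeholder assumptions:

/* Illustrative retry-with-backoff around a refused TCP connect.
 * This is NOT SPDK code; the target and retry policy are placeholders
 * chosen to mirror the failures logged above. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static int try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in sa = { 0 };
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1) {
        close(fd);
        errno = EINVAL;
        return -1;
    }
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        int err = errno;    /* 111 (ECONNREFUSED) while nothing listens */
        close(fd);
        errno = err;
        return -1;
    }
    return fd;              /* caller owns the connected socket */
}

int main(void)
{
    /* Placeholder policy: 5 attempts with doubling delay. */
    for (int attempt = 1, delay = 1; attempt <= 5; attempt++, delay *= 2) {
        int fd = try_connect("127.0.0.1", 4420);    /* placeholder target */
        if (fd >= 0) {
            puts("connected");
            close(fd);
            return 0;
        }
        fprintf(stderr, "attempt %d: connect() failed, errno = %d (%s)\n",
                attempt, errno, strerror(errno));
        sleep((unsigned)delay);                     /* crude backoff */
    }
    fputs("giving up: unable to connect after 5 attempts\n", stderr);
    return 1;
}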
00:23:28.802 [2024-05-15 01:00:15.653002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.653234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.653286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.653541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.653765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.653813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.654018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.654389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.654438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.654666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.654864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.654891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.655112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.655318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.655371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.655603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.655840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.655865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.656004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.656205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.656230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 
00:23:28.803 [2024-05-15 01:00:15.656449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.656770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.656819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.657011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.657245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.657270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.657453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.657716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.657772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.658020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.658174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.658200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.658508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.658795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.658822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.658974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.659180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.659206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.659341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.659601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.659651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 
00:23:28.803 [2024-05-15 01:00:15.659858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.660105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.660153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.660411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.660627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.660652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.660963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.661238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.661298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.661585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.661816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.661841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.661985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.662212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.662237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.662477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.662756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.662809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.663012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.663217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.663244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 
00:23:28.803 [2024-05-15 01:00:15.663439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.663690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.663743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.664015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.664212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.664237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.664486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.664745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.664770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.664978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.665266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.665315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.665531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.665715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.665756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.665957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.666206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.666249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.666559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.666770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.666830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 
00:23:28.803 [2024-05-15 01:00:15.666971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.667202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.667243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.667531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.667769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.667819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.668021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.668268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.668309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.668532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.668688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.668715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.803 qpair failed and we were unable to recover it. 00:23:28.803 [2024-05-15 01:00:15.668874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.803 [2024-05-15 01:00:15.669189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.669231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.669465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.669740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.669793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.670021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.670222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.670275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 
00:23:28.804 [2024-05-15 01:00:15.670418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.670677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.670724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.670992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.671140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.671164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.671378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.671617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.671642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.671960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.672124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.672149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.672410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.672657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.672703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.672874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.673162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.673215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.673445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.673636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.673662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 
00:23:28.804 [2024-05-15 01:00:15.673812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.674055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.674081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.674338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.674548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.674602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.674852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.675015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.675042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.675297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.675591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.675647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.675843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.675975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.676002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.676299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.676538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.676569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.676872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.677168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.677218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 
00:23:28.804 [2024-05-15 01:00:15.677454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.677674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.677704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.677955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.678256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.678308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.678580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.678832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.678879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.679093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.679215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.679241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.679472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.679743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.679794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.680008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.680158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.680184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.680453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.680708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.680759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 
00:23:28.804 [2024-05-15 01:00:15.681042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.681294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.681320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.681459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.681593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.681618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.681919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.682135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.682189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.682383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.682647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.682695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.682842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.683091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.683141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.683364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.683608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.683663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 00:23:28.804 [2024-05-15 01:00:15.683961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.684216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.804 [2024-05-15 01:00:15.684266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.804 qpair failed and we were unable to recover it. 
00:23:28.804 [2024-05-15 01:00:15.684460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.804 [2024-05-15 01:00:15.684767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.804 [2024-05-15 01:00:15.684814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.804 qpair failed and we were unable to recover it.
00:23:28.804 [2024-05-15 01:00:15.685080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.804 [2024-05-15 01:00:15.685245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.804 [2024-05-15 01:00:15.685271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.804 qpair failed and we were unable to recover it.
00:23:28.804 [2024-05-15 01:00:15.685411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.804 [2024-05-15 01:00:15.685601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.804 [2024-05-15 01:00:15.685652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.804 qpair failed and we were unable to recover it.
00:23:28.804 [2024-05-15 01:00:15.685785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.686010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.686060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.686274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.686487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.686513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.686774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.686927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.686960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.687158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.687358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.687383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.687599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.687817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.687869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.688112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.688449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.688501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.688683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.688899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.688959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.689084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.689356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.689411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.689651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.689884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.689909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.690072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.690295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.690350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.690633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.690915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.690977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.691245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.691535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.691585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.691836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.692086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.692136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.692281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.692594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.692641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.692911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.693082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.693109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.693372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.693586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.693639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.693771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.693894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.693920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.694136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.694384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.694415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.694617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.694782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.694809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.695009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.695261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.695314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.695593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.695823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.695876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.696071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.696317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.696360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.696611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.696927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.696984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.697257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.697461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.697492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.697721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.697924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.697982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.698257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.698502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.698530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.698755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.699069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.699120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.699339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.699600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.699649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.699916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.700136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.700189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.700503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.700711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.700736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.700997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.701189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.701242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.701469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.701760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.805 [2024-05-15 01:00:15.701808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.805 qpair failed and we were unable to recover it.
00:23:28.805 [2024-05-15 01:00:15.702013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.702281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.702331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.702563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.702747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.702800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.702990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.703247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.703295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.703589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.703736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.703761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.704039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.704273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.704324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.704643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.704869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.704922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.705112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.705311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.705353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.705583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.705858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.705912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.706147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.706360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.706412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.706672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.706878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.706938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.707205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.707493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.707540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.707771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.708028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.708083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.708350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.708644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.708691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.708948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.709188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.709213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.709467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.709632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.709657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.709860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.710069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.710111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.710325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.710560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.710585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.710820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.711025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.711051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.711291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.711528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.711582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.711796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.712009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.712035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.712270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.712525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.712579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.712714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.712921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.712978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.713209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.713460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.713510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.713774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.713945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.713972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.714233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.714452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.714479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.714729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.714960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.715008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.715228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.715429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.715480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.806 qpair failed and we were unable to recover it.
00:23:28.806 [2024-05-15 01:00:15.715743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.715954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.806 [2024-05-15 01:00:15.715982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.716272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.716612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.716661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.716883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.717138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.717194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.717327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.717550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.717602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.717860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.718116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.718166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.718401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.718638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.718690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.718906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.719158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.719209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.719341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.719566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.719593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.719849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.720096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.720147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.720386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.720539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.720566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.720834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.721124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.721171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.721358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.721556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.721583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.721809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.722147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.722201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.722462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.722752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.722778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.723003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.723244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.723269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.723505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.723744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.723771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.723903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.724194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.724222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.724483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.724707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.724756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.724956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.725271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.725320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.725579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.725915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.725973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.726111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.726344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.726397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.726528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.726741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.726794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.727001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.727130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.727162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.727391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.727634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.727683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.727872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.728093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.728144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.728421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.728708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.728759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.728999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.729135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.729160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.729357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.729675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.729728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.729860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.730079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.730130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.730392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.730540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.730565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.730810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.731112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.731163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.731377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.731596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.731640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.807 qpair failed and we were unable to recover it.
00:23:28.807 [2024-05-15 01:00:15.731866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.807 [2024-05-15 01:00:15.732028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.732054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.732280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.732536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.732599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.732848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.733166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.733213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.733343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.733620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.733679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.733925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.734225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.734274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.734495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.734734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.734781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.734974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.735195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.735249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.735477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.735775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.735825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.736032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.736309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.736356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.736490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.736716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.736767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.737019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.737312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.737363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.737566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.737776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.737803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.738079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.738291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.738318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.738568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.738871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.738920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.739160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.739371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.739414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.739632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.739890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.739946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.740144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.740368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.740422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.740634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.740797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.740822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.741096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.741351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.741403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.741629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.741835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.741862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.742122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.742432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.742483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.742694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.742845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.742870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.743089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.743301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.743327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.743580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.743820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.743846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.744129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.744470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.744519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.744713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.744991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.745018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.745353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.745610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.745656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.745925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.746149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.746202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.746487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.746710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.746762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.746955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.747205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.747262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.747388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.747611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.747661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.747982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.748277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.808 [2024-05-15 01:00:15.748329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.808 qpair failed and we were unable to recover it.
00:23:28.808 [2024-05-15 01:00:15.748572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.808 [2024-05-15 01:00:15.748811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.808 [2024-05-15 01:00:15.748862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.808 qpair failed and we were unable to recover it. 00:23:28.808 [2024-05-15 01:00:15.749062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.749276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.749327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.749526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.749836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.749885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.750107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.750317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.750371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.750610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.750857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.750904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.751138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.751438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.751487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.751635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.751892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.751951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 
00:23:28.809 [2024-05-15 01:00:15.752237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.752540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.752586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.752883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.753086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.753112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.753422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.753629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.753654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.753839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.753992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.754019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.754240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.754533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.754577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.754839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.755178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.755225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.755484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.755636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.755661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 
00:23:28.809 [2024-05-15 01:00:15.755911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.756078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.756103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.756312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.756562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.756607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.756856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.757118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.757170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.757302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.757564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.757614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.757803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.758089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.758137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.758408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.758554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.758579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.758871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.759159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.759211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 
00:23:28.809 [2024-05-15 01:00:15.759412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.759697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.759746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.760013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.760337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.760387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.760680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.760890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.760950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.761166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.761321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.761346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.761600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.761914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.761972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.762104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.762375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.762421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.762719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.763001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.763027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 
00:23:28.809 [2024-05-15 01:00:15.763302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.763543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.763592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.763791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.764048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.764111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.764407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.764644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.764696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.764967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.765277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.765327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.765464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.765788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.765840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.766122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.766353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.766407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.809 qpair failed and we were unable to recover it. 00:23:28.809 [2024-05-15 01:00:15.766673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.766926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.809 [2024-05-15 01:00:15.766992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 
00:23:28.810 [2024-05-15 01:00:15.767209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.767408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.767459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.767741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.768002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.768029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.768305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.768511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.768564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.768696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.768969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.769020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.769298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.769585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.769639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.769969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.770210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.770236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.770461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.770752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.770800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 
00:23:28.810 [2024-05-15 01:00:15.771035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.771325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.771378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.771663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.771912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.771979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.772277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.772544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.772595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.772767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.772939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.772965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.773243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.773522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.773573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.773822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.774090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.774145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.774457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.774754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.774802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 
00:23:28.810 [2024-05-15 01:00:15.775045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.775221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.775248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.775469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.775741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.775792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.776013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.776317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.776365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.776669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.776826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.776853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.777070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.777334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.777381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.777510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.777702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.777782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.777943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.778257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.778305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 
00:23:28.810 [2024-05-15 01:00:15.778552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.778801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.778825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.779079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.779369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.779418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.779666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.779926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.779990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.780298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.780651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.780696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.780957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.781227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.781251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.781484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.781637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.781663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.810 qpair failed and we were unable to recover it. 00:23:28.810 [2024-05-15 01:00:15.781944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.782221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.810 [2024-05-15 01:00:15.782269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.811 qpair failed and we were unable to recover it. 
00:23:28.811 [2024-05-15 01:00:15.782543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.811 [2024-05-15 01:00:15.782829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.811 [2024-05-15 01:00:15.782874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.811 qpair failed and we were unable to recover it. 00:23:28.811 [2024-05-15 01:00:15.783148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.811 [2024-05-15 01:00:15.783510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.811 [2024-05-15 01:00:15.783561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.811 qpair failed and we were unable to recover it. 00:23:28.811 [2024-05-15 01:00:15.783823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.811 [2024-05-15 01:00:15.784071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.811 [2024-05-15 01:00:15.784130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.811 qpair failed and we were unable to recover it. 00:23:28.811 [2024-05-15 01:00:15.784460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.811 [2024-05-15 01:00:15.784746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.811 [2024-05-15 01:00:15.784773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.811 qpair failed and we were unable to recover it. 00:23:28.811 [2024-05-15 01:00:15.785075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.811 [2024-05-15 01:00:15.785390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.811 [2024-05-15 01:00:15.785439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.811 qpair failed and we were unable to recover it. 00:23:28.811 [2024-05-15 01:00:15.785744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.811 [2024-05-15 01:00:15.785908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.811 [2024-05-15 01:00:15.785940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.811 qpair failed and we were unable to recover it. 00:23:28.811 [2024-05-15 01:00:15.786224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.811 [2024-05-15 01:00:15.786521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.811 [2024-05-15 01:00:15.786568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:28.811 qpair failed and we were unable to recover it. 
00:23:28.811 [2024-05-15 01:00:15.786712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.811 [2024-05-15 01:00:15.786947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.811 [2024-05-15 01:00:15.786999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:28.811 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x1eff6d0, with only the timestamps advancing, through 2024-05-15 01:00:15.802398 ...]
00:23:28.812 [2024-05-15 01:00:15.802634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.812 [2024-05-15 01:00:15.802798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.812 [2024-05-15 01:00:15.802827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:28.812 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f7e9c000b90 through 2024-05-15 01:00:15.809189 ...]
00:23:28.812 [2024-05-15 01:00:15.809425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef6190 is same with the state(5) to be set
00:23:28.812 [2024-05-15 01:00:15.809774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.812 [2024-05-15 01:00:15.810006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.812 [2024-05-15 01:00:15.810037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:28.812 qpair failed and we were unable to recover it.
00:23:28.812 [2024-05-15 01:00:15.810249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.812 [2024-05-15 01:00:15.810529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.812 [2024-05-15 01:00:15.810581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.812 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f7eac000b90, with only the timestamps advancing, through 2024-05-15 01:00:15.825128 ...]
00:23:28.813 [2024-05-15 01:00:15.825330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.813 [2024-05-15 01:00:15.825468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.813 [2024-05-15 01:00:15.825493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.813 qpair failed and we were unable to recover it. 00:23:28.813 [2024-05-15 01:00:15.825624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.813 [2024-05-15 01:00:15.825790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.813 [2024-05-15 01:00:15.825815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.813 qpair failed and we were unable to recover it. 00:23:28.813 [2024-05-15 01:00:15.826006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.813 [2024-05-15 01:00:15.826145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.813 [2024-05-15 01:00:15.826169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.813 qpair failed and we were unable to recover it. 00:23:28.813 [2024-05-15 01:00:15.826309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.813 [2024-05-15 01:00:15.826444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.813 [2024-05-15 01:00:15.826469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.813 qpair failed and we were unable to recover it. 00:23:28.813 [2024-05-15 01:00:15.826607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.813 [2024-05-15 01:00:15.826778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.813 [2024-05-15 01:00:15.826806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.813 qpair failed and we were unable to recover it. 00:23:28.813 [2024-05-15 01:00:15.826946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.813 [2024-05-15 01:00:15.827104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.813 [2024-05-15 01:00:15.827132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.813 qpair failed and we were unable to recover it. 00:23:28.813 [2024-05-15 01:00:15.827274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.813 [2024-05-15 01:00:15.827435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.813 [2024-05-15 01:00:15.827461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:28.813 qpair failed and we were unable to recover it. 
00:23:28.813 [2024-05-15 01:00:15.827603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.827772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.827797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.813 qpair failed and we were unable to recover it.
00:23:28.813 [2024-05-15 01:00:15.827922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.828069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.828094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.813 qpair failed and we were unable to recover it.
00:23:28.813 [2024-05-15 01:00:15.828227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.828394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.828420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.813 qpair failed and we were unable to recover it.
00:23:28.813 [2024-05-15 01:00:15.828634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.828807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.828836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.813 qpair failed and we were unable to recover it.
00:23:28.813 [2024-05-15 01:00:15.828989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.829177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.829205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.813 qpair failed and we were unable to recover it.
00:23:28.813 [2024-05-15 01:00:15.829356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.829504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.829533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.813 qpair failed and we were unable to recover it.
00:23:28.813 [2024-05-15 01:00:15.829672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.829811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.829839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.813 qpair failed and we were unable to recover it.
00:23:28.813 [2024-05-15 01:00:15.830020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.830186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.830213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.813 qpair failed and we were unable to recover it.
00:23:28.813 [2024-05-15 01:00:15.830387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.830553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.830580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.813 qpair failed and we were unable to recover it.
00:23:28.813 [2024-05-15 01:00:15.830767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.830951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.830989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.813 qpair failed and we were unable to recover it.
00:23:28.813 [2024-05-15 01:00:15.831130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.831254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.831280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.813 qpair failed and we were unable to recover it.
00:23:28.813 [2024-05-15 01:00:15.831419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.831559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.831585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.813 qpair failed and we were unable to recover it.
00:23:28.813 [2024-05-15 01:00:15.831745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.831881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.831907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.813 qpair failed and we were unable to recover it.
00:23:28.813 [2024-05-15 01:00:15.832058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.832217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.832244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.813 qpair failed and we were unable to recover it.
00:23:28.813 [2024-05-15 01:00:15.832403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.832538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.832564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.813 qpair failed and we were unable to recover it.
00:23:28.813 [2024-05-15 01:00:15.832712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.832850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.813 [2024-05-15 01:00:15.832876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.813 qpair failed and we were unable to recover it.
00:23:28.813 [2024-05-15 01:00:15.833021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.833194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.833221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.833376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.833598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.833625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.833841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.834044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.834070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.834247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.834484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.834509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.834786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.835050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.835098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.835301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.835887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.835915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.836069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.836295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.836344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.836589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.836856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.836881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.837129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.837387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.837437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.837678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.837942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.837969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.838162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.838325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.838368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.838504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.838701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.838727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.838886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.839074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.839118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.839288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.839502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.839529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.839688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.839870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.839896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.840048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.840211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.840254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.840384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.840538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.840580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.840713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.840875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.840918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.841174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.841477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.841503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.841773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.842030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.842074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.842267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.842465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.842526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.842785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.843013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.843039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.843361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.843569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.843593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.843829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.844091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.844117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.844340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.844569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.844623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.844846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.845028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.845055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.845302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.845606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.814 [2024-05-15 01:00:15.845655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:28.814 qpair failed and we were unable to recover it.
00:23:28.814 [2024-05-15 01:00:15.845814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.084 [2024-05-15 01:00:15.845998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.084 [2024-05-15 01:00:15.846025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.084 qpair failed and we were unable to recover it.
00:23:29.084 [2024-05-15 01:00:15.846211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.846524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.846551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.846750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.846988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.847015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.847218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.847421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.847475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.847670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.847942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.847968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.848168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.848420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.848446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.848615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.848817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.848845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.849026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.849244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.849296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.849527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.849768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.849794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.850017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.850241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.850268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.850523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.850737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.850793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.851015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.851208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.851249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.851404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.851616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.851663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.851880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.852116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.852166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.852399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.852643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.852668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.852851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.853085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.853141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.853325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.853511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.853555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.853790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.854022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.854064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.854250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.854492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.854517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.854694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.854924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.854977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.855243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.855443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.855485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.855621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.855827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.855876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.856130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.856378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.085 [2024-05-15 01:00:15.856431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.085 qpair failed and we were unable to recover it.
00:23:29.085 [2024-05-15 01:00:15.856642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.856904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.856929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.857120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.857331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.857383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.857513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.857768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.857794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.858009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.858223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.858274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.858448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.858704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.858760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.858990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.859115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.859141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.859384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.859604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.859654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.859784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.859993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.860023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.860209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.860335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.860360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.860534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.860741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.860765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.861036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.861322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.861369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.861602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.861763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.861788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.861992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.862209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.862262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.862428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.862626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.862651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.862840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.863051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.863077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.863261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.863469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.863524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.863685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.863814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.863839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.864054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.864249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.864274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.864501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.864658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.864683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.864924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.865145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.865199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.865439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.865680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.865730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.865971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.866133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.866175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.866433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.866632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.866680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.866873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.867070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.867122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.867332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.867546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.867597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.867726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.867929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.867995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.086 qpair failed and we were unable to recover it.
00:23:29.086 [2024-05-15 01:00:15.868192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.086 [2024-05-15 01:00:15.868498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.868554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.087 qpair failed and we were unable to recover it.
00:23:29.087 [2024-05-15 01:00:15.868754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.868944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.869029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.087 qpair failed and we were unable to recover it.
00:23:29.087 [2024-05-15 01:00:15.869241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.869501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.869527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.087 qpair failed and we were unable to recover it.
00:23:29.087 [2024-05-15 01:00:15.869748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.869968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.870000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.087 qpair failed and we were unable to recover it.
00:23:29.087 [2024-05-15 01:00:15.870228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.870474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.870499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.087 qpair failed and we were unable to recover it.
00:23:29.087 [2024-05-15 01:00:15.870727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.870965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.871007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.087 qpair failed and we were unable to recover it.
00:23:29.087 [2024-05-15 01:00:15.871177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.871334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.871359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.087 qpair failed and we were unable to recover it.
00:23:29.087 [2024-05-15 01:00:15.871563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.871804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.871830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.087 qpair failed and we were unable to recover it.
00:23:29.087 [2024-05-15 01:00:15.872045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.872175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.872199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.087 qpair failed and we were unable to recover it.
00:23:29.087 [2024-05-15 01:00:15.872326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.872608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.872633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.087 qpair failed and we were unable to recover it.
00:23:29.087 [2024-05-15 01:00:15.872896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.873045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.873070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.087 qpair failed and we were unable to recover it.
00:23:29.087 [2024-05-15 01:00:15.873332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.873577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.873630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.087 qpair failed and we were unable to recover it.
00:23:29.087 [2024-05-15 01:00:15.873846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.874097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.087 [2024-05-15 01:00:15.874143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.087 qpair failed and we were unable to recover it.
00:23:29.087 [2024-05-15 01:00:15.874345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.874602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.874653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.087 qpair failed and we were unable to recover it. 00:23:29.087 [2024-05-15 01:00:15.874871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.875022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.875050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.087 qpair failed and we were unable to recover it. 00:23:29.087 [2024-05-15 01:00:15.875283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.875540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.875595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.087 qpair failed and we were unable to recover it. 00:23:29.087 [2024-05-15 01:00:15.875729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.875969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.875995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.087 qpair failed and we were unable to recover it. 00:23:29.087 [2024-05-15 01:00:15.876203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.876524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.876575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.087 qpair failed and we were unable to recover it. 00:23:29.087 [2024-05-15 01:00:15.876786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.876993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.877019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.087 qpair failed and we were unable to recover it. 00:23:29.087 [2024-05-15 01:00:15.877301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.877430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.877455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.087 qpair failed and we were unable to recover it. 
00:23:29.087 [2024-05-15 01:00:15.877697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.877957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.878001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.087 qpair failed and we were unable to recover it. 00:23:29.087 [2024-05-15 01:00:15.878233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.878509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.878559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.087 qpair failed and we were unable to recover it. 00:23:29.087 [2024-05-15 01:00:15.878774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.878918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.878962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.087 qpair failed and we were unable to recover it. 00:23:29.087 [2024-05-15 01:00:15.879105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.879255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.879282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.087 qpair failed and we were unable to recover it. 00:23:29.087 [2024-05-15 01:00:15.879534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.879768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.879821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.087 qpair failed and we were unable to recover it. 00:23:29.087 [2024-05-15 01:00:15.880087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.880337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.880361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.087 qpair failed and we were unable to recover it. 00:23:29.087 [2024-05-15 01:00:15.880580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.880780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.880834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.087 qpair failed and we were unable to recover it. 
00:23:29.087 [2024-05-15 01:00:15.881068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.881268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.881294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.087 qpair failed and we were unable to recover it. 00:23:29.087 [2024-05-15 01:00:15.881536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.881795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.881846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.087 qpair failed and we were unable to recover it. 00:23:29.087 [2024-05-15 01:00:15.881972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.087 [2024-05-15 01:00:15.882153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.882177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.882413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.882697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.882722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.882922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.883153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.883177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.883359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.883603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.883656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.883898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.884118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.884168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 
00:23:29.088 [2024-05-15 01:00:15.884408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.884674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.884721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.884990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.885191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.885216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.885406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.885535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.885562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.885749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.885971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.886018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.886148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.886285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.886311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.886477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.886677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.886702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.886940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.887083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.887109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 
00:23:29.088 [2024-05-15 01:00:15.887276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.887496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.887547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.887757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.887960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.888006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.888281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.888513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.888569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.888801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.889049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.889100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.889378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.889662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.889710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.889903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.890154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.890205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.890431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.890674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.890727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 
00:23:29.088 [2024-05-15 01:00:15.890960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.891233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.891281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.891428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.891617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.891695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.891826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.892037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.892100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.892316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.892622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.892647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.892855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.893016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.893041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.893260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.893494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.893536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.893742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.893969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.893994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 
00:23:29.088 [2024-05-15 01:00:15.894158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.894359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.894401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.894553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.894793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.894818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.895025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.895237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.895262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.088 [2024-05-15 01:00:15.895533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.895781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.088 [2024-05-15 01:00:15.895836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.088 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.896023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.896252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.896276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.896522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.896758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.896815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.897006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.897258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.897311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 
00:23:29.089 [2024-05-15 01:00:15.897510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.897820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.897872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.898028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.898238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.898289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.898565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.898819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.898873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.899059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.899214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.899239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.899468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.899724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.899769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.899912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.900125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.900177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.900421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.900660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.900711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 
00:23:29.089 [2024-05-15 01:00:15.900891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.901117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.901169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.901394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.901595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.901647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.901861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.902029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.902056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.902255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.902539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.902588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.902765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.903027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.903056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.903344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.903593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.903640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.903837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.904062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.904089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 
00:23:29.089 [2024-05-15 01:00:15.904272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.904543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.904585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.904721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.904921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.904972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.905166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.905322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.905349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.905641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.905961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.906012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.906236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.906509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.906557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.906772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.906984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.907011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.089 qpair failed and we were unable to recover it. 00:23:29.089 [2024-05-15 01:00:15.907286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.089 [2024-05-15 01:00:15.907534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.907584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 
00:23:29.090 [2024-05-15 01:00:15.907730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.908030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.908082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.908409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.908677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.908731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.908915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.909163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.909191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.909468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.909717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.909764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.909957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.910190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.910243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.910430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.910671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.910722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.911046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.911209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.911234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 
00:23:29.090 [2024-05-15 01:00:15.911463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.911772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.911829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.912079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.912350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.912399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.912669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.912910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.912943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.913179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.913436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.913484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.913754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.913963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.914006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.914143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.914457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.914522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.914743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.915018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.915067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 
00:23:29.090 [2024-05-15 01:00:15.915270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.915439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.915464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.915703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.915993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.916020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.916221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.916444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.916496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.916630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.916845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.916898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.917186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.917336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.917361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.917599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.917831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.917884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.918077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.918356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.918402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 
00:23:29.090 [2024-05-15 01:00:15.918682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.918960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.919006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.919137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.919266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.919291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.919524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.919833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.919877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.920079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.920332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.920390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.920610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.920960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.921031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.921224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.921488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.921540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 00:23:29.090 [2024-05-15 01:00:15.921729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.921872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.921898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.090 qpair failed and we were unable to recover it. 
00:23:29.090 [2024-05-15 01:00:15.922176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.090 [2024-05-15 01:00:15.922435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.922484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.091 qpair failed and we were unable to recover it. 00:23:29.091 [2024-05-15 01:00:15.922671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.922953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.923003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.091 qpair failed and we were unable to recover it. 00:23:29.091 [2024-05-15 01:00:15.923282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.923550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.923608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.091 qpair failed and we were unable to recover it. 00:23:29.091 [2024-05-15 01:00:15.923887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.924128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.924181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.091 qpair failed and we were unable to recover it. 00:23:29.091 [2024-05-15 01:00:15.924371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.924654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.924701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.091 qpair failed and we were unable to recover it. 00:23:29.091 [2024-05-15 01:00:15.924903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.925150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.925176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.091 qpair failed and we were unable to recover it. 00:23:29.091 [2024-05-15 01:00:15.925357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.925627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.925679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.091 qpair failed and we were unable to recover it. 
00:23:29.091 [2024-05-15 01:00:15.925961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.926197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.926251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.091 qpair failed and we were unable to recover it. 00:23:29.091 [2024-05-15 01:00:15.926437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.926675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.926728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.091 qpair failed and we were unable to recover it. 00:23:29.091 [2024-05-15 01:00:15.926965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.927171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.927197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.091 qpair failed and we were unable to recover it. 00:23:29.091 [2024-05-15 01:00:15.927458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.927714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.927761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.091 qpair failed and we were unable to recover it. 00:23:29.091 [2024-05-15 01:00:15.927996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.928168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.928193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.091 qpair failed and we were unable to recover it. 00:23:29.091 [2024-05-15 01:00:15.928334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.928556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.928607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.091 qpair failed and we were unable to recover it. 00:23:29.091 [2024-05-15 01:00:15.928856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.929076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.091 [2024-05-15 01:00:15.929127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.091 qpair failed and we were unable to recover it. 
00:23:29.091 [2024-05-15 01:00:15.929375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.091 [2024-05-15 01:00:15.929662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.091 [2024-05-15 01:00:15.929715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.091 qpair failed and we were unable to recover it.
00:23:29.091 [... the pattern above — two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error, then "qpair failed and we were unable to recover it." — repeats without interruption from 01:00:15.929 through 01:00:16.010 for tqpair values 0x7f7e9c000b90, 0x1eff6d0, and 0x7f7ea4000b90, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:23:29.096 [2024-05-15 01:00:16.010638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.010899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.010960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.096 qpair failed and we were unable to recover it. 00:23:29.096 [2024-05-15 01:00:16.011207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.011535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.011583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.096 qpair failed and we were unable to recover it. 00:23:29.096 [2024-05-15 01:00:16.011836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.012147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.012196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.096 qpair failed and we were unable to recover it. 00:23:29.096 [2024-05-15 01:00:16.012484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.012721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.012773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.096 qpair failed and we were unable to recover it. 00:23:29.096 [2024-05-15 01:00:16.012977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.013242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.013291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.096 qpair failed and we were unable to recover it. 00:23:29.096 [2024-05-15 01:00:16.013559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.013795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.013847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.096 qpair failed and we were unable to recover it. 00:23:29.096 [2024-05-15 01:00:16.014089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.014391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.014438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.096 qpair failed and we were unable to recover it. 
00:23:29.096 [2024-05-15 01:00:16.014662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.014894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.014920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.096 qpair failed and we were unable to recover it. 00:23:29.096 [2024-05-15 01:00:16.015155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.015418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.015468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.096 qpair failed and we were unable to recover it. 00:23:29.096 [2024-05-15 01:00:16.015729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.015894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.015921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.096 qpair failed and we were unable to recover it. 00:23:29.096 [2024-05-15 01:00:16.016207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.016466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.016491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.096 qpair failed and we were unable to recover it. 00:23:29.096 [2024-05-15 01:00:16.016717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.017021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.017067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.096 qpair failed and we were unable to recover it. 00:23:29.096 [2024-05-15 01:00:16.017296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.017642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.096 [2024-05-15 01:00:16.017690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.096 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.017890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.018144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.018171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 
00:23:29.097 [2024-05-15 01:00:16.018459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.018669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.018694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.018993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.019266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.019318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.019587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.019870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.019918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.020058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.020328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.020374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.020590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.020918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.020972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.021168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.021384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.021410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.021647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.021963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.022010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 
00:23:29.097 [2024-05-15 01:00:16.022287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.022574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.022621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.022821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.023069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.023125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.023370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.023594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.023619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.023895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.024209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.024258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.024491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.024733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.024768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.024988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.025304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.025355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.025622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.025856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.025883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 
00:23:29.097 [2024-05-15 01:00:16.026151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.026472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.026521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.026788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.027062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.027120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.027252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.027523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.027570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.027796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.027960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.027987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.028260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.028495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.028547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.028796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.029040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.029066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.029248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.029549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.029599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 
00:23:29.097 [2024-05-15 01:00:16.029867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.030103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.030135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.030396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.030642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.030695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.030919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.031222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.031273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.031577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.031827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.031879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.032124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.032382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.032433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.032674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.032875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.032922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.033207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.033460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.033485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 
00:23:29.097 [2024-05-15 01:00:16.033669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.033889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.033948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.034251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.034556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.034602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.034821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.035084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.035140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.035364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.035658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.035709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.097 [2024-05-15 01:00:16.035918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.036250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.097 [2024-05-15 01:00:16.036300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.097 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.036614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.036779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.036804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.037001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.037202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.037228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 
00:23:29.098 [2024-05-15 01:00:16.037361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.037637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.037686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.037928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.038135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.038188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.038334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.038554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.038612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.038843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.039069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.039096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.039312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.039657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.039707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.039920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.040126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.040173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.040387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.040633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.040659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 
00:23:29.098 [2024-05-15 01:00:16.040907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.041221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.041276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.041549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.041856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.041882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.042122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.042399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.042447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.042642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.042903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.042991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.043157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.043303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.043328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.043611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.043827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.043852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.044078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.044403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.044454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 
00:23:29.098 [2024-05-15 01:00:16.044771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.045024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.045075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.045312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.045538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.045566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.045851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.046081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.046136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.046340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.046475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.046501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.046740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.046955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.047006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.047233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.047517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.047565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.047752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.047972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.047999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 
00:23:29.098 [2024-05-15 01:00:16.048143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.048435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.048484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.048705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.048985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.049031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.049251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.049539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.049564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.049794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.050001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.050028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.050262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.050586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.050635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.050847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.051139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.098 [2024-05-15 01:00:16.051192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.098 qpair failed and we were unable to recover it. 00:23:29.098 [2024-05-15 01:00:16.051486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.051704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.051729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 
00:23:29.099 [2024-05-15 01:00:16.051865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.052143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.052196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.052429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.052591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.052616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.052829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.053050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.053099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.053426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.053643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.053694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.053978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.054225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.054275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.054516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.054812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.054861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.055084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.055385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.055434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 
00:23:29.099 [2024-05-15 01:00:16.055661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.055885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.055945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.056224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.056476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.056529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.056783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.056996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.057023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.057238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.057521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.057573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.057708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.057920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.057958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.058199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.058508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.058556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.058752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.058992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.059020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 
00:23:29.099 [2024-05-15 01:00:16.059161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.059422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.059448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.059733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.059997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.060025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.060228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.060498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.060549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.060790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.061025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.061052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.061196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.061428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.061477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.061709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.062006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.062057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 00:23:29.099 [2024-05-15 01:00:16.062277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.062509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-05-15 01:00:16.062534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.099 qpair failed and we were unable to recover it. 
00:23:29.099 [2024-05-15 01:00:16.062670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.099 [2024-05-15 01:00:16.062819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.099 [2024-05-15 01:00:16.062846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:29.099 qpair failed and we were unable to recover it.
00:23:29.099 [... the same four-line failure (two posix_sock_create connect() errors with errno = 111, i.e. ECONNREFUSED on Linux; one nvme_tcp_qpair_connect_sock "sock connection error"; then "qpair failed and we were unable to recover it.") repeats back-to-back for tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420, covering attempts timestamped 2024-05-15 01:00:16.063013 through 01:00:16.084910 ...]
00:23:29.101 [2024-05-15 01:00:16.085065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.101 [2024-05-15 01:00:16.085283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.101 [2024-05-15 01:00:16.085332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.101 qpair failed and we were unable to recover it.
00:23:29.101 [2024-05-15 01:00:16.086751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.086987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.087038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.087301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.087461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.087486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.087682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.087990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.088017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.088268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.088565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.088618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.088893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.089105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.089131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.089264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.089497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.089547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.089754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.089979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.090007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 
00:23:29.101 [2024-05-15 01:00:16.090220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.090389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.090414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.090603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.090805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.090830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.091062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.091331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.091378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.091506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.091768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.091815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.092064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.092313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.092338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.092491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.092778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.092832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.093104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.093324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.093349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 
00:23:29.101 [2024-05-15 01:00:16.093570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.093814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.093867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.094005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.094227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.094276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.094550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.094842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.094890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.095171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.095327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.095354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.095599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.095884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.095938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.096156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.096438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.096494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.096759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.097003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.097053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 
00:23:29.101 [2024-05-15 01:00:16.097262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.097424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.097451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.097726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.098003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.098030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.098282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.098536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.098561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.098818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.099187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.099237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.099432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.099716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.099762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.099962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.100221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.100267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.100537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.100831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.100879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 
00:23:29.101 [2024-05-15 01:00:16.101128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.101398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.101445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.101588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.101875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.101938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.102252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.102541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.102593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.102779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.103001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.103031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.101 qpair failed and we were unable to recover it. 00:23:29.101 [2024-05-15 01:00:16.103256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.101 [2024-05-15 01:00:16.103533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.103584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.103809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.104158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.104205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.104434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.104667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.104713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 
00:23:29.102 [2024-05-15 01:00:16.104990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.105281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.105329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.105586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.105871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.105921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.106182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.106388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.106413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.106548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.106746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.106774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.106997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.107238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.107268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.107519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.107756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.107809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.107954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.108269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.108319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 
00:23:29.102 [2024-05-15 01:00:16.108534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.108724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.108751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.108996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.109258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.109304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.109485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.109768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.109820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.110079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.110298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.110351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.110628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.110895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.110950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.111151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.111372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.111422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.111668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.111818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.111843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 
00:23:29.102 [2024-05-15 01:00:16.112078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.112322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.112378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.112633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.112925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.112977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.113231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.113524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.113573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.113805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.114087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.114133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.114375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.114609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.114661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.114928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.115151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.115178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.115374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.115586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.115636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 
00:23:29.102 [2024-05-15 01:00:16.115892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.116148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.116202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.116343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.116590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.116642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.116873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.117120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.117146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.117409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.117634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.117686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.117963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.118240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.118287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.118527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.118772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.118798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 00:23:29.102 [2024-05-15 01:00:16.119022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.119344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.119394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.102 qpair failed and we were unable to recover it. 
00:23:29.102 [2024-05-15 01:00:16.119627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.119864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.102 [2024-05-15 01:00:16.119914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.120063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.120187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.120212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.120474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.120818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.120883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.121207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.121496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.121544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.121851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.122031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.122058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.122294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.122581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.122628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.122889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.123133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.123186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 
00:23:29.103 [2024-05-15 01:00:16.123465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.123643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.123668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.123858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.124015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.124042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.124279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.124487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.124538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.124854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.125015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.125043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.125240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.125505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.125550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.125818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.126101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.126150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.126421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.126632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.126680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 
00:23:29.103 [2024-05-15 01:00:16.126824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.127019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.127083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.127361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.127584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.127632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.127767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.128107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.128156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.128377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.128645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.128693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.128974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.129209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.129262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.129429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.129685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.129733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.130050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.130369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.130420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 
00:23:29.103 [2024-05-15 01:00:16.130674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.130961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.131006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.103 [2024-05-15 01:00:16.131139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.131391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.103 [2024-05-15 01:00:16.131439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.103 qpair failed and we were unable to recover it. 00:23:29.377 [2024-05-15 01:00:16.131691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.377 [2024-05-15 01:00:16.131986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.377 [2024-05-15 01:00:16.132012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.377 qpair failed and we were unable to recover it. 00:23:29.377 [2024-05-15 01:00:16.132321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.377 [2024-05-15 01:00:16.132638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.377 [2024-05-15 01:00:16.132688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.377 qpair failed and we were unable to recover it. 00:23:29.377 [2024-05-15 01:00:16.132972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.377 [2024-05-15 01:00:16.133182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.377 [2024-05-15 01:00:16.133208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.377 qpair failed and we were unable to recover it. 00:23:29.377 [2024-05-15 01:00:16.133440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.377 [2024-05-15 01:00:16.133649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.377 [2024-05-15 01:00:16.133677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.377 qpair failed and we were unable to recover it. 00:23:29.377 [2024-05-15 01:00:16.133951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.377 [2024-05-15 01:00:16.134255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.377 [2024-05-15 01:00:16.134308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.377 qpair failed and we were unable to recover it. 
00:23:29.377 [2024-05-15 01:00:16.134600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.134905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.134967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.377 qpair failed and we were unable to recover it.
00:23:29.377 [2024-05-15 01:00:16.135104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.135411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.135460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.377 qpair failed and we were unable to recover it.
00:23:29.377 [2024-05-15 01:00:16.135778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.135940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.135968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.377 qpair failed and we were unable to recover it.
00:23:29.377 [2024-05-15 01:00:16.136254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.136543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.136593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.377 qpair failed and we were unable to recover it.
00:23:29.377 [2024-05-15 01:00:16.136720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.136966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.137013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.377 qpair failed and we were unable to recover it.
00:23:29.377 [2024-05-15 01:00:16.137289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.137448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.137473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.377 qpair failed and we were unable to recover it.
00:23:29.377 [2024-05-15 01:00:16.137717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.137944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.137970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.377 qpair failed and we were unable to recover it.
00:23:29.377 [2024-05-15 01:00:16.138203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.138359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.138386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.377 qpair failed and we were unable to recover it.
00:23:29.377 [2024-05-15 01:00:16.138648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.138896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.138922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.377 qpair failed and we were unable to recover it.
00:23:29.377 [2024-05-15 01:00:16.139068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.139396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.139446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.377 qpair failed and we were unable to recover it.
00:23:29.377 [2024-05-15 01:00:16.139695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.139951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.139977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.377 qpair failed and we were unable to recover it.
00:23:29.377 [2024-05-15 01:00:16.140160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.140450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.140498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.377 qpair failed and we were unable to recover it.
00:23:29.377 [2024-05-15 01:00:16.140772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.140992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.141020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.377 qpair failed and we were unable to recover it.
00:23:29.377 [2024-05-15 01:00:16.141237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.377 [2024-05-15 01:00:16.141516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.141568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.141755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.141996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.142052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.142300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.142639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.142687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.142880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.143118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.143172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.143447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.143721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.143769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.143988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.144269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.144317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.144459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.144692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.144749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.145005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.145278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.145331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.145541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.145822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.145871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.146121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.146286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.146313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.146577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.146874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.146921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.147240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.147501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.147550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.147742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.147966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.148012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.148145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.148380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.148431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.148656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.148882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.148909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.149191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.149452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.149505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.149724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.150004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.150033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.150250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.150566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.150620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.150871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.151052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.151105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.378 qpair failed and we were unable to recover it.
00:23:29.378 [2024-05-15 01:00:16.151384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.378 [2024-05-15 01:00:16.151668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.151718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.151902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.152093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.152121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.152254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.152554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.152602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.152811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.153099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.153146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.153412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.153655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.153704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.153978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.154131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.154157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.154430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.154583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.154609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.154855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.155126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.155174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.155416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.155619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.155644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.155869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.156200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.156250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.156512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.156724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.156748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.157000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.157199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.157250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.157379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.157630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.157681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.157915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.158069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.158102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.158273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.158408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.158433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.158574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.158745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.158780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.159765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.159971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.159998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.160301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.160524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.160587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.160742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.160971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.160996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.161205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.161373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.161397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.379 qpair failed and we were unable to recover it.
00:23:29.379 [2024-05-15 01:00:16.161602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.161856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.379 [2024-05-15 01:00:16.161880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.162019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.162246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.162299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.162609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.162874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.162925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.163153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.163462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.163515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.163644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.163922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.163992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.164281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.164557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.164613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.164855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.165081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.165135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.165382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.165615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.165644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.165836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.166059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.166107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.166326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.166601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.166650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.166887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.167108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.167133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.167332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.167562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.167586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.167774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.168016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.168068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.168285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.168439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.168463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.168693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.168924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.168986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.169193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.169411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.169458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.169658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.169838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.169863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.170065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.170332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.170384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.170589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.170848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.170904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.171152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.171402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.171456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.380 qpair failed and we were unable to recover it.
00:23:29.380 [2024-05-15 01:00:16.171678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.171929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.380 [2024-05-15 01:00:16.171961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.172180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.172438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.172489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.172714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.173005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.173031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.173158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.173350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.173404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.173643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.173921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.173990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.174121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.174352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.174402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.174590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.174875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.174928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.175131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.175319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.175346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.175548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.175796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.175853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.175979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.176208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.176260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.176510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.176738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.176789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.177004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.177275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.177328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.177457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.177664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.177715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.177961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.178210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.178261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.178490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.178754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.178807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.179063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.179214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.179241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.179473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.179689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.179716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.179916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.180183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.180234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.180461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.180704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.180755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.180998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.181212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.181265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.181451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.181644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.181671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.181801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.182035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.182085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.182291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.182529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.182582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.182822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.183051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.183139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.183352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.183606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.183666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.183796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.184008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.184065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.184260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.184503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.381 [2024-05-15 01:00:16.184556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.381 qpair failed and we were unable to recover it.
00:23:29.381 [2024-05-15 01:00:16.184681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.184900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.184954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.185183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.185492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.185542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.185730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.185923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.185956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.186190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.186420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.186470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.186739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.186946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.186972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.187186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.187448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.187497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.187715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.187999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.188047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.188289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.188523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.188571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.188705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.188914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.188981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.189218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.189451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.189501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.189631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.189857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.189913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.190129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.190335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.190365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.190500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.190636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.190661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.190846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.191031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.191058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.191308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.191451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.191476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.191707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.191950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.192000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.192223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.192373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.192399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.192589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.192862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.192908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.193104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.193256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.193281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.193492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.193643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.193667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.193864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.194157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.194205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.194423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.194639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.194664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.194904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.195141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.195168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.195298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.195419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.382 [2024-05-15 01:00:16.195444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.382 qpair failed and we were unable to recover it.
00:23:29.382 [2024-05-15 01:00:16.195725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.383 [2024-05-15 01:00:16.195876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.383 [2024-05-15 01:00:16.195901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.383 qpair failed and we were unable to recover it.
00:23:29.383 [2024-05-15 01:00:16.196039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.383 [2024-05-15 01:00:16.196171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.383 [2024-05-15 01:00:16.196196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.383 qpair failed and we were unable to recover it.
00:23:29.383 [2024-05-15 01:00:16.196432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.383 [2024-05-15 01:00:16.196671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.383 [2024-05-15 01:00:16.196697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.383 qpair failed and we were unable to recover it.
00:23:29.383 [2024-05-15 01:00:16.196826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.197032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.197116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.197355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.197506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.197533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.197766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.197993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.198044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.198238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.198499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.198545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.198780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.198989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.199014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.199286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.199497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.199549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.199769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.200042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.200096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 
00:23:29.383 [2024-05-15 01:00:16.200286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.200412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.200437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.200644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.200910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.200971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.201102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.201351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.201400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.201666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.201898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.201958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.202183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.202425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.202476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.202604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.202892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.202950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.203080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.203304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.203357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 
00:23:29.383 [2024-05-15 01:00:16.203581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.203788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.203813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.204046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.204298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.204350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.204560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.204811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.204860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.205051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.205315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.205363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.205559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.205789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.205814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.206003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.206141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.206168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.206410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.206704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.206750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 
00:23:29.383 [2024-05-15 01:00:16.206952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.207100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.207127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.207432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.207707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.207754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.207978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.208208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.208253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.208384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.208649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.208700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.383 qpair failed and we were unable to recover it. 00:23:29.383 [2024-05-15 01:00:16.208949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.383 [2024-05-15 01:00:16.209196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.209252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.209436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.209684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.209735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.209952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.210238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.210286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 
00:23:29.384 [2024-05-15 01:00:16.210555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.210787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.210839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.211094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.211339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.211393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.211648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.211965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.211991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.212203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.212494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.212543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.212762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.213002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.213028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.213166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.213422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.213471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.213682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.213911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.213942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 
00:23:29.384 [2024-05-15 01:00:16.214138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.214293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.214319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.214561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.214812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.214864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.215073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.215284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.215336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.215552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.215774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.215823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.216076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.216308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.216333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.216593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.216841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.216887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.217085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.217327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.217378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 
00:23:29.384 [2024-05-15 01:00:16.217638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.217861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.217913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.218058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.218274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.218321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.218536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.218782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.218831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.219039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.219280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.219307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.219512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.219815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.219863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.220131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.220363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.220415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.220607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.220736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.220762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 
00:23:29.384 [2024-05-15 01:00:16.221039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.221331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.221390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.221520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.221734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.221785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.222003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.222263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.222312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.222565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.222719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.222745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.222963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.223194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.223247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.384 [2024-05-15 01:00:16.223482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.223688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.384 [2024-05-15 01:00:16.223742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.384 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.223871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.224117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.224175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 
00:23:29.385 [2024-05-15 01:00:16.224394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.224651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.224704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.224835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.224969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.224996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.225124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.225399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.225440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.225694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.225902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.225927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.226219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.226454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.226479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.226684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.226981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.227007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.227222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.227471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.227524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 
00:23:29.385 [2024-05-15 01:00:16.227752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.228070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.228118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.228379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.228627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.228680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.228867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.229122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.229171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.229430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.229667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.229713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.229983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.230107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.230132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.230336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.230601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.230651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.230776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.230986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.231012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 
00:23:29.385 [2024-05-15 01:00:16.231144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.231356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.231404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.231613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.231824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.231876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.232120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.232325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.232350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.232561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.232776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.232830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.233025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.233221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.233275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.233564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.233788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.233836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.234050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.234204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.234229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 
00:23:29.385 [2024-05-15 01:00:16.234422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.234666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.234719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.234965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.235237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.235289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.235544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.235822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.235869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.236007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.236138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.236163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.236289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.236528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.236576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.236791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.236915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.236956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 00:23:29.385 [2024-05-15 01:00:16.237177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.237428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.237482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.385 qpair failed and we were unable to recover it. 
00:23:29.385 [2024-05-15 01:00:16.237672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.385 [2024-05-15 01:00:16.237885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.237946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.238189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.238482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.238531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.238755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.238927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.238963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.239171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.239392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.239445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.239685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.239907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.239940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.240228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.240463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.240518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.240739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.240963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.241009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 
00:23:29.386 [2024-05-15 01:00:16.241262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.241526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.241577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.241838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.241994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.242021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.242283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.242546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.242593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.242797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.243042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.243092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.243360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.243687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.243736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.243966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.244236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.244286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.244552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.244801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.244855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 
00:23:29.386 [2024-05-15 01:00:16.245076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.245328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.245353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.245659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.245981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.246006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.246271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.246533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.246583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.246853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.247145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.247197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.247379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.247588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.247640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.247884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.248236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.248286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.248496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.248716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.248770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 
00:23:29.386 [2024-05-15 01:00:16.248984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.249195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.249220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.249350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.249620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.249666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.249884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.250220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.250267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.250522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.250759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.250783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.251051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.251326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.251374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.251559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.251802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.251849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 00:23:29.386 [2024-05-15 01:00:16.252073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.252301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.386 [2024-05-15 01:00:16.252353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.386 qpair failed and we were unable to recover it. 
00:23:29.386 [2024-05-15 01:00:16.252584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.386 [2024-05-15 01:00:16.252818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.386 [2024-05-15 01:00:16.252843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.386 qpair failed and we were unable to recover it.
[... the same four-line failure sequence repeats with only the timestamps changing (roughly 25 further attempts, 01:00:16.253106 through 01:00:16.265909), every one against addr=10.0.0.2, port=4420, errno = 111, tqpair=0x1eff6d0 ...]
00:23:29.388 [2024-05-15 01:00:16.266050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.388 [2024-05-15 01:00:16.266255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.388 [2024-05-15 01:00:16.266282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:29.388 qpair failed and we were unable to recover it.
[... five further attempts on tqpair=0x7f7ea4000b90 fail identically through 01:00:16.268917, after which the log reverts to tqpair=0x1eff6d0 ...]
[... the failure sequence then continues uninterrupted on tqpair=0x1eff6d0 (roughly 120 further attempts, 01:00:16.269190 through 01:00:16.328066), all to addr=10.0.0.2, port=4420 with errno = 111 ...]
00:23:29.392 [2024-05-15 01:00:16.328324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.392 [2024-05-15 01:00:16.328478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.392 [2024-05-15 01:00:16.328502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.392 qpair failed and we were unable to recover it.
00:23:29.392 [2024-05-15 01:00:16.328728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.328977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.329027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.392 qpair failed and we were unable to recover it. 00:23:29.392 [2024-05-15 01:00:16.329270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.329538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.329588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.392 qpair failed and we were unable to recover it. 00:23:29.392 [2024-05-15 01:00:16.329775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.329901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.329926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.392 qpair failed and we were unable to recover it. 00:23:29.392 [2024-05-15 01:00:16.330164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.330451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.330500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.392 qpair failed and we were unable to recover it. 00:23:29.392 [2024-05-15 01:00:16.330732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.330992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.331045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.392 qpair failed and we were unable to recover it. 00:23:29.392 [2024-05-15 01:00:16.331230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.331476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.331528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.392 qpair failed and we were unable to recover it. 00:23:29.392 [2024-05-15 01:00:16.331730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.331972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.332023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.392 qpair failed and we were unable to recover it. 
00:23:29.392 [2024-05-15 01:00:16.332149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.332340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.332404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.392 qpair failed and we were unable to recover it. 00:23:29.392 [2024-05-15 01:00:16.332635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.332956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.333005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.392 qpair failed and we were unable to recover it. 00:23:29.392 [2024-05-15 01:00:16.333134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.333348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.333399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.392 qpair failed and we were unable to recover it. 00:23:29.392 [2024-05-15 01:00:16.333624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.333826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.392 [2024-05-15 01:00:16.333876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.392 qpair failed and we were unable to recover it. 00:23:29.392 [2024-05-15 01:00:16.334019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.334160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.334185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.334380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.334625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.334675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.334918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.335119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.335177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 
00:23:29.393 [2024-05-15 01:00:16.335417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.335643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.335668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.335796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.336007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.336059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.336236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.336508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.336561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.336790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.336950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.336975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.337105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.337326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.337378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.337611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.337868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.337918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.338136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.338356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.338381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 
00:23:29.393 [2024-05-15 01:00:16.338603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.338866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.338917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.339139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.339374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.339419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.339552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.339748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.339799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.339938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.340153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.340203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.340443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.340722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.340770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.340923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.341189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.341242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.341370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.341617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.341663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 
00:23:29.393 [2024-05-15 01:00:16.341796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.341999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.342025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.342216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.342462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.342513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.342706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.342995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.343021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.343244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.343397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.343422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.343700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.343982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.344039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.344243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.344495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.344548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.344813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.345099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.345124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 
00:23:29.393 [2024-05-15 01:00:16.345331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.345475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.345501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.345732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.345995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.346054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.346276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.346459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.346484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.346614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.346745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.346771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.346914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.347101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.347126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.347333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.347541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.347567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.347737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.348001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.348052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 
00:23:29.393 [2024-05-15 01:00:16.348238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.348432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.348459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.348738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.348976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.349026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.349269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.349492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.349542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.349753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.349989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.350015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.350243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.350542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.350567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.350781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.350988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.351013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.351203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.351437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.351494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 
00:23:29.393 [2024-05-15 01:00:16.351694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.351962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.352007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.352210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.352484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.352532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.352723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.352924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.352976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.393 [2024-05-15 01:00:16.353174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.353418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.393 [2024-05-15 01:00:16.353471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.393 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.353733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.353887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.353914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.354123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.354364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.354409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.354661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.354913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.354972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 
00:23:29.394 [2024-05-15 01:00:16.355151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.355391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.355415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.355623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.355899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.355954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.356200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.356428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.356453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.356663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.356820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.356845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.357008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.357260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.357314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.357543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.357753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.357780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.358020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.358279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.358331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 
00:23:29.394 [2024-05-15 01:00:16.358524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.358678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.358703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.358834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.359065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.359117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.359326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.359559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.359610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.359744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.359983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.360009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.360284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.360529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.360580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.360769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.361028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.361081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.361301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.361534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.361591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 
00:23:29.394 [2024-05-15 01:00:16.361835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.362020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.362074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.362201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.362431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.362480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.362710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.362956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.363004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.363265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.363476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.363501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.363715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.363950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.364008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.364145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.364410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.364459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.364696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.364907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.364941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 
00:23:29.394 [2024-05-15 01:00:16.365127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.365386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.365434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.365619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.365835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.365881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.366013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.366278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.366327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.366455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.366749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.366797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.367029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.367282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.367330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.367515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.367740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.367793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.368041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.368284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.368337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 
00:23:29.394 [2024-05-15 01:00:16.368575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.368834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.368879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.369029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.369344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.369393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.369605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.369864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.369912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.370120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.370379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.370428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.370671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.370815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.370840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.371011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.371251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.371300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.371546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.371789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.371841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 
00:23:29.394 [2024-05-15 01:00:16.372081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.372359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.372411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.372694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.372953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.373005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.373211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.373462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.373508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.394 qpair failed and we were unable to recover it. 00:23:29.394 [2024-05-15 01:00:16.373748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.394 [2024-05-15 01:00:16.374009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.374035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.395 qpair failed and we were unable to recover it. 00:23:29.395 [2024-05-15 01:00:16.374170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.374384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.374408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.395 qpair failed and we were unable to recover it. 00:23:29.395 [2024-05-15 01:00:16.374644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.374795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.374820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.395 qpair failed and we were unable to recover it. 00:23:29.395 [2024-05-15 01:00:16.375024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.375167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.375191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.395 qpair failed and we were unable to recover it. 
00:23:29.395 [2024-05-15 01:00:16.375419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.375709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.375756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.395 qpair failed and we were unable to recover it. 00:23:29.395 [2024-05-15 01:00:16.376026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.376317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.376369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.395 qpair failed and we were unable to recover it. 00:23:29.395 [2024-05-15 01:00:16.376503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.376766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.376815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.395 qpair failed and we were unable to recover it. 00:23:29.395 [2024-05-15 01:00:16.377098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.377342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.377390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.395 qpair failed and we were unable to recover it. 00:23:29.395 [2024-05-15 01:00:16.377660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.377963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.378006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.395 qpair failed and we were unable to recover it. 00:23:29.395 [2024-05-15 01:00:16.378214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.378419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.378471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.395 qpair failed and we were unable to recover it. 00:23:29.395 [2024-05-15 01:00:16.378716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.378973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.379031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.395 qpair failed and we were unable to recover it. 
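Triage note: errno 111 on Linux is ECONNREFUSED, meaning the initiator's TCP SYN to 10.0.0.2:4420 (the standard NVMe/TCP port) was answered with a reset because nothing was accepting connections there at that moment, so every fabric connect attempt for this qpair fails immediately. A minimal, self-contained sketch that reproduces the same errno against a port with no listener (a hypothetical standalone program, not part of SPDK or this test suite):

/* connect_refused.c - minimal illustration of errno 111 (ECONNREFUSED).
 * Hypothetical sketch for log triage; not part of SPDK or this test.
 * Build: cc -o connect_refused connect_refused.c
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <ipv4-addr> <port>\n", argv[0]);
        return 1;
    }

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons((uint16_t)atoi(argv[2]));
    if (inet_pton(AF_INET, argv[1], &sa.sin_addr) != 1) {
        fprintf(stderr, "bad address: %s\n", argv[1]);
        close(fd);
        return 1;
    }

    /* With no listener on the target port, the peer answers the SYN
     * with RST and connect() fails with ECONNREFUSED (111 on Linux). */
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }

    close(fd);
    return 0;
}

Run as ./connect_refused 10.0.0.2 4420 while the target is down and it prints "connect() failed, errno = 111 (Connection refused)", matching the posix.c messages above.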
00:23:29.395 [2024-05-15 01:00:16.379307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.395 [2024-05-15 01:00:16.379592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.395 [2024-05-15 01:00:16.379642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.395 qpair failed and we were unable to recover it.
[... the same pattern, now for tqpair=0x7f7eac000b90, repeats without variation from 01:00:16.379773 through 01:00:16.388470; the repetitions are elided here ...]
00:23:29.395 [2024-05-15 01:00:16.388667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.395 [2024-05-15 01:00:16.388928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.395 [2024-05-15 01:00:16.388998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.395 qpair failed and we were unable to recover it.
00:23:29.395 [2024-05-15 01:00:16.389229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.389463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.389488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.395 qpair failed and we were unable to recover it. 00:23:29.395 [2024-05-15 01:00:16.389703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.389994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.390034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.395 qpair failed and we were unable to recover it. 00:23:29.395 [2024-05-15 01:00:16.390327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.390618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.390671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.395 qpair failed and we were unable to recover it. 00:23:29.395 [2024-05-15 01:00:16.390862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.391129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.391182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.395 qpair failed and we were unable to recover it. 00:23:29.395 [2024-05-15 01:00:16.391331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.391594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.395 [2024-05-15 01:00:16.391643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.395 qpair failed and we were unable to recover it. 00:23:29.395 [2024-05-15 01:00:16.391834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.392017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.392044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.392339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.392646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.392696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 
00:23:29.396 [2024-05-15 01:00:16.392917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.393101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.393155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.393342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.393604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.393656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.393951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.394226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.394278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.394488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.394745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.394803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.395026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.395218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.395269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.395450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.395686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.395741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.395959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.396249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.396300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 
00:23:29.396 [2024-05-15 01:00:16.396522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.396772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.396820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.397032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.397279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.397330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.397537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.397826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.397876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.398017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.398278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.398328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.398544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.398821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.398865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.399005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.399249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.399301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.399560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.399814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.399866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 
00:23:29.396 [2024-05-15 01:00:16.399996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.400251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.400300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.400512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.400723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.400777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.400987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.401196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.401221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.401430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.401644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.401697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.401906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.402198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.402245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.402437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.402693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.402733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.403008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.403260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.403308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 
00:23:29.396 [2024-05-15 01:00:16.403526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.403760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.403786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.404017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.404275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.404330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.404554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.404810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.404866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.405032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.405295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.405346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.405568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.405804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.405830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.406067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.406330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.406380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.406557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.406799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.406827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 
00:23:29.396 [2024-05-15 01:00:16.407041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.407273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.407322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.407501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.407709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.407762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.407996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.408212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.408261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.408499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.408747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.408801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.396 [2024-05-15 01:00:16.409004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.409245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.396 [2024-05-15 01:00:16.409301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.396 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.409529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.409742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.409767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.410035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.410240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.410265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 
00:23:29.397 [2024-05-15 01:00:16.410540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.410783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.410833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.411037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.411270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.411334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.411520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.411745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.411794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.412023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.412285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.412336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.412593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.412838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.412887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.413099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.413377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.413428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.413656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.413873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.413927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 
00:23:29.397 [2024-05-15 01:00:16.414116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.414324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.414353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.414574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.414837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.414864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.415050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.415333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.415386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.415577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.415813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.415868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.416066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.416230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.416256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.416455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.416654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.416681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.416874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.417185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.417238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 
00:23:29.397 [2024-05-15 01:00:16.417461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.417677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.417702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.417981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.418226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.418279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.418511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.418802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.418851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.419056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.419210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.419240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.419474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.419729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.419779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.419983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.420234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.420283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 00:23:29.397 [2024-05-15 01:00:16.420417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.420656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.397 [2024-05-15 01:00:16.420706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.397 qpair failed and we were unable to recover it. 
00:23:29.698 [2024-05-15 01:00:16.420993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.421222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.421271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 00:23:29.698 [2024-05-15 01:00:16.421441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.421610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.421644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 00:23:29.698 [2024-05-15 01:00:16.421982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.422179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.422206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 00:23:29.698 [2024-05-15 01:00:16.422400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.422653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.422712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 00:23:29.698 [2024-05-15 01:00:16.422968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.423187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.423241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 00:23:29.698 [2024-05-15 01:00:16.423480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.423757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.423806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 00:23:29.698 [2024-05-15 01:00:16.424020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.424351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.424399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 
00:23:29.698 [2024-05-15 01:00:16.424640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.424895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.424920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 00:23:29.698 [2024-05-15 01:00:16.425174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.425429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.425478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 00:23:29.698 [2024-05-15 01:00:16.425709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.425974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.426026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 00:23:29.698 [2024-05-15 01:00:16.426177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.426385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.426411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 00:23:29.698 [2024-05-15 01:00:16.426676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.426989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.427016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 00:23:29.698 [2024-05-15 01:00:16.427167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.427468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.427524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 00:23:29.698 [2024-05-15 01:00:16.427659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.427885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.427944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 
00:23:29.698 [2024-05-15 01:00:16.428168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.428388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.428438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 00:23:29.698 [2024-05-15 01:00:16.428728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.429013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.429069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 00:23:29.698 [2024-05-15 01:00:16.429204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.429441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.429496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 00:23:29.698 [2024-05-15 01:00:16.429772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.429989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.430015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 00:23:29.698 [2024-05-15 01:00:16.430255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.430486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.698 [2024-05-15 01:00:16.430543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.698 qpair failed and we were unable to recover it. 00:23:29.698 [2024-05-15 01:00:16.430830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.431065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.431112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.431327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.431605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.431655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 
00:23:29.699 [2024-05-15 01:00:16.431889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.432140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.432185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.432432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.432684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.432710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.432983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.433188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.433236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.433487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.433726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.433779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.433938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.434082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.434109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.434337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.434600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.434657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.434851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.435050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.435116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 
00:23:29.699 [2024-05-15 01:00:16.435366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.435591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.435643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.435833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.436049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.436104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.436343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.436624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.436683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.436979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.437219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.437277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.437493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.437733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.437782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.437995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.438130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.438156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.438388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.438658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.438707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 
00:23:29.699 [2024-05-15 01:00:16.438891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.439114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.439141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.439378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.439637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.439691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.439850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.439987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.440013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.440246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.440525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.440570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.440810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.441062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.441093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.441308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.441554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.441611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.441851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.442105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.442155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 
00:23:29.699 [2024-05-15 01:00:16.442381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.442674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.442723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.442995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.443129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.443156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.443446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.443600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.443626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.443827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.444053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.444100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.444311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.444587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.444636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.444951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.445215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.445269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.699 qpair failed and we were unable to recover it. 00:23:29.699 [2024-05-15 01:00:16.445426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.699 [2024-05-15 01:00:16.445657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.445710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 
00:23:29.700 [2024-05-15 01:00:16.445921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.446074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.446101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.446292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.446573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.446622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.446851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.447037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.447065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.447197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.447488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.447539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.447784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.448028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.448111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.448304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.448540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.448595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.448881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.449189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.449237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 
00:23:29.700 [2024-05-15 01:00:16.449496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.449735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.449788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.450024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.450187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.450213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.450442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.450699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.450754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.450888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.451098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.451126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.451326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.451545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.451572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.451827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.452194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.452243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.452442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.452712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.452764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 
00:23:29.700 [2024-05-15 01:00:16.453009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.453207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.453260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.453493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.453784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.453833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.454056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.454184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.454209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.454482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.454627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.454654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.454847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.455074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.455129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.455392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.455618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.455672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.455894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.456180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.456229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 
00:23:29.700 [2024-05-15 01:00:16.456497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.456758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.456810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.457088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.457383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.457447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.457683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.457990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.458017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.458151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.458368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.458417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.458653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.458927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.458994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.459213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.459518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.459571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 00:23:29.700 [2024-05-15 01:00:16.459779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.459942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.459968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.700 qpair failed and we were unable to recover it. 
00:23:29.700 [2024-05-15 01:00:16.460121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.460300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.700 [2024-05-15 01:00:16.460340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.460473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.460660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.460689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.460861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.461016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.461042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.461221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.461400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.461425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.461577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.461751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.461776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.461913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.462074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.462114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.462267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.462435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.462475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 
00:23:29.701 [2024-05-15 01:00:16.462632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.462822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.462862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.463026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.463204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.463243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.463402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.463566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.463605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.463787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.463971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.464006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.464171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.464350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.464377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.464545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.464742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.464781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.464979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.465129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.465167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 
00:23:29.701 [2024-05-15 01:00:16.465303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.465474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.465501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.465671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.465831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.465871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.466022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.466208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.466248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.466396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.466555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.466595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.466748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.466973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.467000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.467131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.467279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.467318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.467472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.467637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.467676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 
00:23:29.701 [2024-05-15 01:00:16.467887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.468054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.468081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.468245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.468416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.468443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.468628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.468783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.468822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.468998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.469172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.469197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.469371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.469531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.469571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.469724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.469891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.469916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.470069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.470224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.470262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 
00:23:29.701 [2024-05-15 01:00:16.470424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.470552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.470577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.470737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.470896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.470940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.701 qpair failed and we were unable to recover it. 00:23:29.701 [2024-05-15 01:00:16.471091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.701 [2024-05-15 01:00:16.471248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.471286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.471498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.471784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.471833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.472027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.472294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.472348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.472550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.472844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.472902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.473142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.473350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.473376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 
00:23:29.702 [2024-05-15 01:00:16.473568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.473758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.473811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.474037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.474323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.474372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.474546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.474749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.474775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.474986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.475171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.475222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.475463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.475724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.475777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.476000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.476142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.476168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.476391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.476714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.476767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 
00:23:29.702 [2024-05-15 01:00:16.476907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.477059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.477088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.477361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.477601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.477652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.477862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.478098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.478148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.478377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.478603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.478657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.478854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.479081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.479135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.479347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.479536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.479576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.479784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.480095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.480146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 
00:23:29.702 [2024-05-15 01:00:16.480310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.480511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.480564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.480694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.480907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.480967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.481173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.481424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.481480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.481701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.481962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.482002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.482237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.482466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.702 [2024-05-15 01:00:16.482521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.702 qpair failed and we were unable to recover it. 00:23:29.702 [2024-05-15 01:00:16.482721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.482870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.482895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.483042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.483286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.483325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 
00:23:29.703 [2024-05-15 01:00:16.483563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.483822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.483873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.484080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.484386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.484436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.484654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.484905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.484968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.485195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.485444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.485492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.485681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.485813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.485841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.486073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.486342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.486394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.486679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.486907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.486976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 
00:23:29.703 [2024-05-15 01:00:16.487229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.487476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.487501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.487711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.487966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.488010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.488250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.488548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.488599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.488865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.489171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.489228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.489458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.489691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.489717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.489989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.490205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.490257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.490457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.490721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.490771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 
00:23:29.703 [2024-05-15 01:00:16.491011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.491263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.491289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.491524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.491837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.491892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.492087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.492394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.492442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.492686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.492930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.492996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.493134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.493414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.493463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.493691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.493952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.493978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.494207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.494506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.494557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 
00:23:29.703 [2024-05-15 01:00:16.494784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.495021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.495078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.495271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.495555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.495601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.495827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.496083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.496136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.496327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.496565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.496613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.496745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.496945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.496989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.497183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.497343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.497374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.703 qpair failed and we were unable to recover it. 00:23:29.703 [2024-05-15 01:00:16.497609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.703 [2024-05-15 01:00:16.497916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.497975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.704 qpair failed and we were unable to recover it. 
00:23:29.704 [2024-05-15 01:00:16.498139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.498367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.498419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.704 qpair failed and we were unable to recover it. 00:23:29.704 [2024-05-15 01:00:16.498607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.498733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.498758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.704 qpair failed and we were unable to recover it. 00:23:29.704 [2024-05-15 01:00:16.498897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.499109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.499162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.704 qpair failed and we were unable to recover it. 00:23:29.704 [2024-05-15 01:00:16.499303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.499559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.499620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.704 qpair failed and we were unable to recover it. 00:23:29.704 [2024-05-15 01:00:16.499892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.500141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.500192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.704 qpair failed and we were unable to recover it. 00:23:29.704 [2024-05-15 01:00:16.500326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.500549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.500602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.704 qpair failed and we were unable to recover it. 00:23:29.704 [2024-05-15 01:00:16.500854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.501023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.501049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.704 qpair failed and we were unable to recover it. 
00:23:29.704 [2024-05-15 01:00:16.501312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.501581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.501628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.704 qpair failed and we were unable to recover it. 00:23:29.704 [2024-05-15 01:00:16.501823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.501978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.502004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.704 qpair failed and we were unable to recover it. 00:23:29.704 [2024-05-15 01:00:16.502199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.502379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.502432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.704 qpair failed and we were unable to recover it. 00:23:29.704 [2024-05-15 01:00:16.502632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.502894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.502950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.704 qpair failed and we were unable to recover it. 00:23:29.704 [2024-05-15 01:00:16.503083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.503367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.503414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.704 qpair failed and we were unable to recover it. 00:23:29.704 [2024-05-15 01:00:16.504425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.504569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.504596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.704 qpair failed and we were unable to recover it. 00:23:29.704 [2024-05-15 01:00:16.504853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.505634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.704 [2024-05-15 01:00:16.505668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.704 qpair failed and we were unable to recover it. 
00:23:29.704 [2024-05-15 01:00:16.505926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.506212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.506261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.704 qpair failed and we were unable to recover it.
00:23:29.704 [2024-05-15 01:00:16.506490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.506619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.506644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.704 qpair failed and we were unable to recover it.
00:23:29.704 [2024-05-15 01:00:16.506797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.507004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.507030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.704 qpair failed and we were unable to recover it.
00:23:29.704 [2024-05-15 01:00:16.507889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.508164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.508219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.704 qpair failed and we were unable to recover it.
00:23:29.704 [2024-05-15 01:00:16.508498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.508778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.508826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.704 qpair failed and we were unable to recover it.
00:23:29.704 [2024-05-15 01:00:16.509014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.509847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.509878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.704 qpair failed and we were unable to recover it.
00:23:29.704 [2024-05-15 01:00:16.510147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.510439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.510464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.704 qpair failed and we were unable to recover it.
00:23:29.704 [2024-05-15 01:00:16.510595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.510808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.510855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.704 qpair failed and we were unable to recover it.
00:23:29.704 [2024-05-15 01:00:16.511129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.511420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.511465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.704 qpair failed and we were unable to recover it.
00:23:29.704 [2024-05-15 01:00:16.511695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.511988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.512013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.704 qpair failed and we were unable to recover it.
00:23:29.704 [2024-05-15 01:00:16.512288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.512498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.512549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.704 qpair failed and we were unable to recover it.
00:23:29.704 [2024-05-15 01:00:16.512795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.512968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.512996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.704 qpair failed and we were unable to recover it.
00:23:29.704 [2024-05-15 01:00:16.513267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.513499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.513524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.704 qpair failed and we were unable to recover it.
00:23:29.704 [2024-05-15 01:00:16.513795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.513952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.513990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.704 qpair failed and we were unable to recover it.
00:23:29.704 [2024-05-15 01:00:16.514328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.514492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.704 [2024-05-15 01:00:16.514520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.704 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.514816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.514999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.515038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.515270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.515527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.515573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.515789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.516034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.516061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.516279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.516545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.516596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.516809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.517052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.517113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.517323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.517577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.517616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.517877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.518090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.518142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.518353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.518614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.518661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.518924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.519194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.519242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.519460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.519732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.519760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.519904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.520122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.520162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.520297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.520524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.520574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.520709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.520893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.520956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.521145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.521303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.521328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.521517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.521739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.521794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.521981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.522188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.522235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.522497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.522731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.522785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.523045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.523265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.523320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.523453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.523709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.523758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.523964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.524258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.524297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.524568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.524834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.524884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.525048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.525311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.525359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.525568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.525815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.525869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.526111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.526352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.526406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.526611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.526861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.526913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.527052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.527238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.527291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.527476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.527757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.527810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.528081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.528332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.528382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.528693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.528877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.528902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.705 qpair failed and we were unable to recover it.
00:23:29.705 [2024-05-15 01:00:16.529109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.705 [2024-05-15 01:00:16.529433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.529489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.529770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.530050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.530078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.530374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.530608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.530660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.530941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.531239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.531290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.531514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.531669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.531697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.531915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.532106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.532160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.532441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.532724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.532771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.532901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.533122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.533174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.533423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.533655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.533707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.533919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.534156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.534202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.534398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.534610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.534640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.534831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.535090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.535141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.535300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.535491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.535554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.535815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.536081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.536134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.536395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.536647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.536699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.536971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.537222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.537272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.537407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.537677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.537725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.537866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.538133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.538181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.538352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.538599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.538634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.538860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.539142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.539193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.539438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.539683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.539742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.539966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.540139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.540165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.540424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.540774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.540832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.541060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.541283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.541309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.541443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.541584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.541643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.541895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.542181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.542234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.542472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.542739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.542788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.543040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.543189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.543217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.543374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.543566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.543593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.706 [2024-05-15 01:00:16.543783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.543987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.706 [2024-05-15 01:00:16.544044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.706 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.544177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.544409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.544464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.544674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.544942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.544985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.545186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.545436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.545492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.545704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.546689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.546720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.546951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.547199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.547227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.547357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.547592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.547651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.547889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.548056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.548083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.548271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.548503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.548588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.548746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.548993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.549020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.549189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.549356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.549383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.549534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.549746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.549790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.549981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.550895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.550941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.551082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.551955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.551987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.552192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.552920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.552958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.553224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.553472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.553498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.553647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.553776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.553833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.554112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.554369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.554397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.554555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.554708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.554735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.554873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.555003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.555073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.555221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.555367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.555394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.555601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.555768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.555793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.556047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.556246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.556300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.556460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.556684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.556732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.557024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.557243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.557269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.707 [2024-05-15 01:00:16.557471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.557660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.707 [2024-05-15 01:00:16.557687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.707 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.557820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.558035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.558124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.558278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.558451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.558505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.558650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.558896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.558951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.559158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.559398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.559478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.559629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.560587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.560618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.560850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.561014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.561042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.561358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.561573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.561648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.561875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.562130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.562175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.562362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.562593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.562655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.562977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.563217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.563266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.563552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.563747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.563773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.564015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.564231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.564319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.564512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.564833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.564886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.565191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.565486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.565549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.565789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.566010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.566037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.566266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.566482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.566510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.566744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.567015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.567043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.567264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.567430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.567457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.567654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.567883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.567950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.568826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.568966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.568995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.569129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.569261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.569325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.569458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.569611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.569638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.569780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.570012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.570046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.570268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.570464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.570545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.570710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.570848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.570875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.572115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.572325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.572379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.572530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.572865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.708 [2024-05-15 01:00:16.572894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.708 qpair failed and we were unable to recover it.
00:23:29.708 [2024-05-15 01:00:16.573111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.708 [2024-05-15 01:00:16.573370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.708 [2024-05-15 01:00:16.573432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.708 qpair failed and we were unable to recover it. 00:23:29.708 [2024-05-15 01:00:16.573635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.708 [2024-05-15 01:00:16.573820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.708 [2024-05-15 01:00:16.573847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.708 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.574047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.574250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.574292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.574423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.574578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.574621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.574804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.575010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.575038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.575178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.575314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.575341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.575480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.575656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.575697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 
00:23:29.709 [2024-05-15 01:00:16.575891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.576108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.576136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.576306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.576467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.576510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.576716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.576913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.576988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.577226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.577478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.577507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.577696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.577951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.578038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.578171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.578407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.578439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.578676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.578867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.578895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 
00:23:29.709 [2024-05-15 01:00:16.579076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.579297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.579324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.579456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.579689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.579717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.579954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.580183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.580237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.580395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.580598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.580639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.580822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.581030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.581072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.581429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.581624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.581676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.581862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.582101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.582129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 
00:23:29.709 [2024-05-15 01:00:16.582295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.582504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.582532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.582784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.583009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.583035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.583188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.583334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.583361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.583568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.583757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.583850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.584128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.584387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.584472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.584710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.585011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.585039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.585228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.585446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.585500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 
00:23:29.709 [2024-05-15 01:00:16.585728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.585978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.586019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.586169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.586372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.586412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.586702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.586963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.587004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.709 qpair failed and we were unable to recover it. 00:23:29.709 [2024-05-15 01:00:16.587233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.709 [2024-05-15 01:00:16.587399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.587438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.587599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.587885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.587946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.588138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.588356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.588380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.588650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.588815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.588844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 
00:23:29.710 [2024-05-15 01:00:16.589058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.589218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.589245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.589423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.589645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.589698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.589829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.590013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.590043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.590277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.590502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.590550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.590781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.590986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.591013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.591185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.591314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.591339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.591556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.591845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.591896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 
00:23:29.710 [2024-05-15 01:00:16.592111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.592279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.592319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.592506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.592698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.592738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.592887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.593136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.593194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.593367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.593568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.593592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.593744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.593968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.594009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.594206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.594430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.594482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.594612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.594767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.594807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 
00:23:29.710 [2024-05-15 01:00:16.594996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.595134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.595166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.595316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.595529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.595570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.595759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.595996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.596046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.596199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.596423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.596472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.596669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.596857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.596922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.597107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.597288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.597345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.597544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.597688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.597713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 
00:23:29.710 [2024-05-15 01:00:16.597914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.598191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.598217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.598401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.598621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.598671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.598881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.599169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.599223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.599360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.599539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.599567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.599754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.599944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.710 [2024-05-15 01:00:16.599970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-05-15 01:00:16.600144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.600379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.600430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.600623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.600839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.600880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 
00:23:29.711 [2024-05-15 01:00:16.601047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.601215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.601244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.601476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.601708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.601761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.601914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.602208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.602233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.602425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.602578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.602604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.602794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.602991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.603022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.603287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.603583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.603608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.603804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.603996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.604029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 
00:23:29.711 [2024-05-15 01:00:16.604254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.604550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.604574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.604774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.605013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.605039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.605228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.605460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.605513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.605713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.605912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.605970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.606166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.606341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.606381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.606600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.606817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.606843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.607042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.607284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.607342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 
00:23:29.711 [2024-05-15 01:00:16.607617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.607812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.607839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.608035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.608241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.608271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.608444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.608649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.608696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.608887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.609086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.609128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.609258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.609399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.609440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.609654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.609854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.609895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.610106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.610352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.610376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 
00:23:29.711 [2024-05-15 01:00:16.610591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.610744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.610769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.610901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.611122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.611177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.711 qpair failed and we were unable to recover it. 00:23:29.711 [2024-05-15 01:00:16.611355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.711 [2024-05-15 01:00:16.611607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.611660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.611862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.612058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.612112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.612342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.612552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.612593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.612742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.612954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.613007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.613189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.613341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.613366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 
00:23:29.712 [2024-05-15 01:00:16.613559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.613740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.613788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.613985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.614206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.614255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.614478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.614664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.614704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.614942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.615173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.615197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.615382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.615599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.615657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.615811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.616022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.616048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.616287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.616524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.616578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 
00:23:29.712 [2024-05-15 01:00:16.616815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.617104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.617130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.617357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.617500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.617525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.617737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.617920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.617953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.618138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.618318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.618360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.618556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.618738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.618775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.618988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.619201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.619228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.619500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.619730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.619783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 
00:23:29.712 [2024-05-15 01:00:16.619912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.620100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.620153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.620431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.620649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.620702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.620840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.621088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.621142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.621363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.621570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.621609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.621865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.622167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.622198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.622484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.622692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.622737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.622929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.623149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.623199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 
00:23:29.712 [2024-05-15 01:00:16.623331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.623528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.623581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.623770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.623992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.624035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.624230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.624473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.624498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.624689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.624905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.712 [2024-05-15 01:00:16.624971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.712 qpair failed and we were unable to recover it. 00:23:29.712 [2024-05-15 01:00:16.625208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.625468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.625497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.713 qpair failed and we were unable to recover it. 00:23:29.713 [2024-05-15 01:00:16.625763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.625998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.626024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.713 qpair failed and we were unable to recover it. 00:23:29.713 [2024-05-15 01:00:16.626198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.626421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.626445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.713 qpair failed and we were unable to recover it. 
00:23:29.713 [2024-05-15 01:00:16.626658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.626956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.627004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.713 qpair failed and we were unable to recover it. 00:23:29.713 [2024-05-15 01:00:16.627191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.627396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.627434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.713 qpair failed and we were unable to recover it. 00:23:29.713 [2024-05-15 01:00:16.627716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.627919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.628004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.713 qpair failed and we were unable to recover it. 00:23:29.713 [2024-05-15 01:00:16.628199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.628494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.628527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.713 qpair failed and we were unable to recover it. 00:23:29.713 [2024-05-15 01:00:16.628784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.629055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.629082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.713 qpair failed and we were unable to recover it. 00:23:29.713 [2024-05-15 01:00:16.629272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.629564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.629612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.713 qpair failed and we were unable to recover it. 00:23:29.713 [2024-05-15 01:00:16.629746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.629986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.713 [2024-05-15 01:00:16.630013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.713 qpair failed and we were unable to recover it. 
00:23:29.713 [2024-05-15 01:00:16.630290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.630451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.630477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.630763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.631032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.631058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.631237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.631527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.631551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.631802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.631982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.632024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.632153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.632378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.632428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.632631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.632820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.632857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.633104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.633301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.633327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.633530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.633753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.633779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.634002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.634215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.634269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.634586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.634801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.634840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.635108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.635288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.635333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.635530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.635686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.635713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.635938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.636192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.636243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.636552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.636809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.636861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.637047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.637292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.637347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.637620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.637982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.638011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.638226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.638480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.638527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.638824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.639029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.639056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.639242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.639427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.639454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.713 [2024-05-15 01:00:16.639587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.639790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.713 [2024-05-15 01:00:16.639815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.713 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.640029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.640168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.640195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.640381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.640656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.640681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.640918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.641207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.641237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.641431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.641632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.641658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.641927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.642141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.642193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.642392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.642539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.642566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.642824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.642983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.643010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.643144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.643277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.643304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.643446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.643659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.643708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.643966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.644252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.644305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.644532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.644810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.644862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.644999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.645275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.645323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.645485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.645677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.645727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.645929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.646252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.646298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.646491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.646754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.646805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.646995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.647180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.647205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.647477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.647745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.647796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.648007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.648233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.648273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.648527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.648829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.648854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.648985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.649279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.649334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.649559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.649831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.649884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.650019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.650186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.650227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.650453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.650710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.650758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.650924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.651217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.651298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.651509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.651761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.651812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.652102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.652403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.652432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.652562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.652793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.652841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.653069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.653342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.653367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.653551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.653856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.653881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.714 qpair failed and we were unable to recover it.
00:23:29.714 [2024-05-15 01:00:16.654033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.714 [2024-05-15 01:00:16.654816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.654854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.655058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.655265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.655313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.655536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.655847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.655887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.656023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.656222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.656264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.656538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.656778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.656829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.656979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.657200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.657226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.657364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.657533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.657577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.657790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.658003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.658058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.658259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.658421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.658465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.658658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.658877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.658917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.659186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.659393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.659437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.659649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.659833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.659858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.659991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.660830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.660860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.661069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.661357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.661382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.661574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.661808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.661835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.662037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.662216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.662260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.662433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.662615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.662656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.662811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.662990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.663033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.663187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.663379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.663403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.663615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.663827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.663874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.664030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.664314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.664358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.664502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.664662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.664687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.664885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.665070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.665096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.665331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.665594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.665619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.665822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.665966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.665993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.666282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.666526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.666551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.666781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.666913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.666948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.667182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.667402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.667432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.667648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.667802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.667828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.667968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.668156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.715 [2024-05-15 01:00:16.668182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:29.715 qpair failed and we were unable to recover it.
00:23:29.715 [2024-05-15 01:00:16.668439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.668695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.668723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.668916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.669186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.669238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.669454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.669672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.669726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.669894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.670071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.670098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.670230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.670453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.670500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.670770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.671007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.671033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.671278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.671485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.671539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.671752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.672004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.672035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.672337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.672546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.672572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.672838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.673052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.673108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.673316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.673594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.673622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.673756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.673887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.673914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.674134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.674359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.674413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.674645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.674891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.674919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.675157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.675404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.675455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.675661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.675896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.675923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.676097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.676297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.676348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.676491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.676686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.676739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.676871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.677056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.677105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.677462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.677593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.677618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.677836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.678034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.678061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.678305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.678490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.678517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.678704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.678947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.678990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.679194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.679337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.679365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.679528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.679678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.679704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.716 qpair failed and we were unable to recover it.
00:23:29.716 [2024-05-15 01:00:16.679969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.680234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.716 [2024-05-15 01:00:16.680287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.680546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.680819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.680873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.681036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.681242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.681288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.681504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.681725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.681752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.681883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.682144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.682198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.682424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.682629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.682669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.682922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.683160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.683185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.683430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.683614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.683642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.683865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.684189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.684251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.684469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.684703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.684744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.685007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.685163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.685190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.685384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.685513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.685538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.685767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.686014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.686043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.686245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.686448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.686497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.686675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.686870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.686898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.687093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.687337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.687391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.687581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.687781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.687835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.688016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.688146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.688171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.688404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.688676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.688747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.688951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.689124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.689167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.689421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.689640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.689671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.689886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.690155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.717 [2024-05-15 01:00:16.690201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:29.717 qpair failed and we were unable to recover it.
00:23:29.717 [2024-05-15 01:00:16.690467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.717 [2024-05-15 01:00:16.690707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.717 [2024-05-15 01:00:16.690758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.717 qpair failed and we were unable to recover it. 00:23:29.717 [2024-05-15 01:00:16.690896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.717 [2024-05-15 01:00:16.691062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.717 [2024-05-15 01:00:16.691090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.717 qpair failed and we were unable to recover it. 00:23:29.717 [2024-05-15 01:00:16.691286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.717 [2024-05-15 01:00:16.691544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.717 [2024-05-15 01:00:16.691593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.717 qpair failed and we were unable to recover it. 00:23:29.717 [2024-05-15 01:00:16.691766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.717 [2024-05-15 01:00:16.691900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.717 [2024-05-15 01:00:16.691927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.717 qpair failed and we were unable to recover it. 00:23:29.717 [2024-05-15 01:00:16.692172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.717 [2024-05-15 01:00:16.692447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.717 [2024-05-15 01:00:16.692495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.717 qpair failed and we were unable to recover it. 00:23:29.717 [2024-05-15 01:00:16.692709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.717 [2024-05-15 01:00:16.692862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.717 [2024-05-15 01:00:16.692887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.717 qpair failed and we were unable to recover it. 00:23:29.717 [2024-05-15 01:00:16.693032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.717 [2024-05-15 01:00:16.693221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.717 [2024-05-15 01:00:16.693272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.717 qpair failed and we were unable to recover it. 
00:23:29.717 [2024-05-15 01:00:16.693533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.717 [2024-05-15 01:00:16.693686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.717 [2024-05-15 01:00:16.693714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.717 qpair failed and we were unable to recover it. 00:23:29.717 [2024-05-15 01:00:16.693986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.694164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.694191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.694383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.694580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.694606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.694789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.695007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.695064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.695331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.695635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.695680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.695983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.696137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.696163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.696422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.696646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.696675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 
00:23:29.718 [2024-05-15 01:00:16.696909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.697211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.697266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.697542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.697754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.697817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.698008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.698163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.698207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.698432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.698610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.698653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.698839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.699098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.699151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.699427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.699672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.699724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.699854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.700052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.700107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 
00:23:29.718 [2024-05-15 01:00:16.700265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.700489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.700545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.700749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.700890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.700917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.701081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.701243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.701286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.701504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.701752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.701779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.702025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.702249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.702296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.702450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.702656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.702709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.702952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.703212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.703259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 
00:23:29.718 [2024-05-15 01:00:16.703496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.703728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.703781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.703977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.704192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.704238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.704433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.704710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.704765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.704948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.705145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.705198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.705419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.705608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.705659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.705789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.706038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.706093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.706264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.706464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.706490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 
00:23:29.718 [2024-05-15 01:00:16.706683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.706870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.706924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.707172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.707327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.707352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.707532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.707777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.707818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.718 qpair failed and we were unable to recover it. 00:23:29.718 [2024-05-15 01:00:16.708010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.718 [2024-05-15 01:00:16.708188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.708241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.708456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.708639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.708665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.708879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.709064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.709096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.709315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.709539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.709592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 
00:23:29.719 [2024-05-15 01:00:16.709726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.710015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.710068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.710280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.710536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.710588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.710719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.710851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.710877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.711078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.711323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.711352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.711503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.711757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.711810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.712004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.712244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.712295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.712486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.712720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.712776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 
00:23:29.719 [2024-05-15 01:00:16.712967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.713186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.713227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.713420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.713621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.713674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.713888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.714166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.714219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.714425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.714680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.714731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.714858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.715088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.715142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.715359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.715522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.715563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.715697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.715916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.715989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 
00:23:29.719 [2024-05-15 01:00:16.716130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.716309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.716349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.716477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.716691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.716739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.716983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.717226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.717266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.717400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.717632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.717682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.717919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.718086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.718113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.718344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.718579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.718629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.718773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.718951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.718980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 
00:23:29.719 [2024-05-15 01:00:16.719222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.719508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.719559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.719685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.719823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.719851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.720014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.720224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.720253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.720517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.720754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.720796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.721001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.721177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.721206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.721350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.721481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.721508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 00:23:29.719 [2024-05-15 01:00:16.721716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.721979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.722020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.719 qpair failed and we were unable to recover it. 
00:23:29.719 [2024-05-15 01:00:16.722167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.719 [2024-05-15 01:00:16.722419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.722474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.722667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.722974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.723020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.723156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.723385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.723425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.723675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.723959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.724010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.724144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.724337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.724388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.724566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.724764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.724790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.724999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.725237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.725289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 
00:23:29.720 [2024-05-15 01:00:16.725425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.725711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.725763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.725944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.726197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.726251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.726498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.726680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.726707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.726903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.727095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.727123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.727264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.727492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.727544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.727782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.727991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.728018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.728252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.728525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.728576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 
00:23:29.720 [2024-05-15 01:00:16.728815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.728996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.729023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.729200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.729430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.729480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.729625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.729759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.729784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.730014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.730247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.730301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.730540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.730741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.730791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.731012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.731234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.731289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.731425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.731656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.731710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 
00:23:29.720 [2024-05-15 01:00:16.731978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.732252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.732312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.732533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.732771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.732796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.732948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.733189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.733240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.733375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.733530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.733555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.733740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.733988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.734015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.734191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.734360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.734387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 00:23:29.720 [2024-05-15 01:00:16.734553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.734740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.720 [2024-05-15 01:00:16.734784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.720 qpair failed and we were unable to recover it. 
00:23:29.720 [2024-05-15 01:00:16.734954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.735125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.735163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.721 qpair failed and we were unable to recover it. 00:23:29.721 [2024-05-15 01:00:16.735353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.735598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.735623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.721 qpair failed and we were unable to recover it. 00:23:29.721 [2024-05-15 01:00:16.735785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.735969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.736010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.721 qpair failed and we were unable to recover it. 00:23:29.721 [2024-05-15 01:00:16.736169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.736337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.736377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.721 qpair failed and we were unable to recover it. 00:23:29.721 [2024-05-15 01:00:16.736508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.736659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.736700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.721 qpair failed and we were unable to recover it. 00:23:29.721 [2024-05-15 01:00:16.736854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.737019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.737062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.721 qpair failed and we were unable to recover it. 00:23:29.721 [2024-05-15 01:00:16.737225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.737394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.737434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.721 qpair failed and we were unable to recover it. 
00:23:29.721 [2024-05-15 01:00:16.737596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.737772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.737814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.721 qpair failed and we were unable to recover it. 00:23:29.721 [2024-05-15 01:00:16.737963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.738117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.738158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.721 qpair failed and we were unable to recover it. 00:23:29.721 [2024-05-15 01:00:16.738327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.738478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.738519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.721 qpair failed and we were unable to recover it. 00:23:29.721 [2024-05-15 01:00:16.738668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.738865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.721 [2024-05-15 01:00:16.738890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.721 qpair failed and we were unable to recover it. 00:23:29.721 [2024-05-15 01:00:16.739043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.739193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.739231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.739384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.739547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.739577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.739833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.740069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.740118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 
00:23:29.997 [2024-05-15 01:00:16.740261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.740500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.740552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.740808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.741017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.741056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.741293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.741500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.741525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.741675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.741857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.741883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.742046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.742196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.742221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.742379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.742530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.742557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.742697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.742851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.742891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 
00:23:29.997 [2024-05-15 01:00:16.743057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.743229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.743267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.743419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.743584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.743623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.743775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.743955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.744001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.744164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.744354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.744381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.744515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.744659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.744698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.744834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.744964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.744991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.745153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.745306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.745345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 
00:23:29.997 [2024-05-15 01:00:16.745500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.745669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.745708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.745837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.745995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.746035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.746189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.746335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.746361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.746531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.746690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.746729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.746924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.747196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.747241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.747376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.747608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.747664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.997 [2024-05-15 01:00:16.747882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.748108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.748135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 
00:23:29.997 [2024-05-15 01:00:16.748436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.748655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.997 [2024-05-15 01:00:16.748680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.997 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.748925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.749167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.749208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.749404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.749655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.749707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.749892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.750107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.750160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.750382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.750654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.750706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.750997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.751209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.751263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.751395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.751523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.751548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 
00:23:29.998 [2024-05-15 01:00:16.751717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.751959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.752009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.752190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.752375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.752418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.752631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.752785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.752810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.752951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.753152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.753205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.753441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.753701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.753752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.753955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.754188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.754230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.754448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.754665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.754706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 
00:23:29.998 [2024-05-15 01:00:16.754896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.755160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.755215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.755455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.755709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.755761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.755896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.756137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.756195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.756425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.756690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.756737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.756865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.757028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.757092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.757318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.757551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.757602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.757796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.757974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.758001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 
00:23:29.998 [2024-05-15 01:00:16.758178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.758462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.758512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.758720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.758979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.759005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.759227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.759445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.759492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.759678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.759870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.759912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.760146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.760365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.760394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.760622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.760845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.760871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.761042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.761240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.761291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 
00:23:29.998 [2024-05-15 01:00:16.761442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.761684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.761748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.761986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.762261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.762313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.762511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.762668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.762695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.762913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.763113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.763156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.763338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.763502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.763543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.763776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.764003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.764030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.764260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.764566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.764593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 
00:23:29.998 [2024-05-15 01:00:16.764847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.765046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.765100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.765339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.765570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.765622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.765837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.766053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.766108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.766296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.766426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.766457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.766699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.766943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.766984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.767173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.767394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.767436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.998 qpair failed and we were unable to recover it. 00:23:29.998 [2024-05-15 01:00:16.767647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.998 [2024-05-15 01:00:16.767785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.767811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 
00:23:29.999 [2024-05-15 01:00:16.768045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.768293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.768348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.768568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.768767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.768793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.768992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.769122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.769149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.769391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.769618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.769670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.769889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.770087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.770130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.770342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.770620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.770669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.770803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.770938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.770970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 
00:23:29.999 [2024-05-15 01:00:16.771177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.771439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.771485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.771682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.771900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.771952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.772098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.772380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.772430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.772665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.772858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.772899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.773099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.773330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.773383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.773575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.773720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.773746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.773888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.774128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.774177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 
00:23:29.999 [2024-05-15 01:00:16.774308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.774561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.774614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.774877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.775161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.775216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.775444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.775705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.775765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.775996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.776258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.776311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.776510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.776741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.776789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.777000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.777286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.777335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.777531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.777794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.777849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 
00:23:29.999 [2024-05-15 01:00:16.778111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.778323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.778374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.778594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.778827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.778879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.779154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.779368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.779422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.779644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.779910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.779971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.780192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.780472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.780536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.780671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.780900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.780953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.781207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.781470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.781518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 
00:23:29.999 [2024-05-15 01:00:16.781708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.781928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.781967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.782244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.782545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.782573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.782817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.782979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.783007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.783190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.783473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.783498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.783746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.783990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.784017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.784276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.784504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.784544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.784791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.785016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.785060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 
00:23:29.999 [2024-05-15 01:00:16.785262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.785488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.785540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.785685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.785924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.786004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.786176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.786408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.786463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.786712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.786995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.787022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.787226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.787381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.787408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.787669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.787901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.787963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 00:23:29.999 [2024-05-15 01:00:16.788222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.788373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.999 [2024-05-15 01:00:16.788400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:29.999 qpair failed and we were unable to recover it. 
00:23:29.999 [2024-05-15 01:00:16.788647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.788808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.788835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.788974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.789225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.789272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.789422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.789635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.789661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.789924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.790129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.790170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.790388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.790618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.790670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.790900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.791156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.791184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.791461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.791612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.791639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 
00:23:30.000 [2024-05-15 01:00:16.791844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.792057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.792110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.792396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.792594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.792633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.792819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.792982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.793008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.793254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.793501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.793540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.793741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.793943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.793969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.794171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.794368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.794407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.794620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.794894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.794951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 
00:23:30.000 [2024-05-15 01:00:16.795084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.795291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.795317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.795509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.795766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.795818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.796012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.796149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.796176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.796364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.796630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.796686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.796864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.797043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.797096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.797262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.797460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.797510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.797713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.797921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.797954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 
00:23:30.000 [2024-05-15 01:00:16.798152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.798378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.798403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.798619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.798875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.798937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.799071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.799261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.799315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.799503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.799686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.799711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.799946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.800202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.800227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.800432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.800583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.800610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.800806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.801030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.801057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 
00:23:30.000 [2024-05-15 01:00:16.801269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.801445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.801471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.801711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.801872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.801897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.802110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.802266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.802291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.802491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.802728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.802783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.802994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.803184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.803210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.803432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.803603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.803656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 00:23:30.000 [2024-05-15 01:00:16.803864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.804109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.000 [2024-05-15 01:00:16.804160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.000 qpair failed and we were unable to recover it. 
00:23:30.000 [2024-05-15 01:00:16.804435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.000 [2024-05-15 01:00:16.804603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.000 [2024-05-15 01:00:16.804644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.000 qpair failed and we were unable to recover it.
00:23:30.000 [2024-05-15 01:00:16.804872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.000 [2024-05-15 01:00:16.805059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.000 [2024-05-15 01:00:16.805085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.000 qpair failed and we were unable to recover it.
00:23:30.000 [2024-05-15 01:00:16.805309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.000 [2024-05-15 01:00:16.805561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.000 [2024-05-15 01:00:16.805613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.000 qpair failed and we were unable to recover it.
00:23:30.000 [2024-05-15 01:00:16.805815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.000 [2024-05-15 01:00:16.805966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.000 [2024-05-15 01:00:16.805992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.000 qpair failed and we were unable to recover it.
00:23:30.000 [2024-05-15 01:00:16.806180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.000 [2024-05-15 01:00:16.806434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.000 [2024-05-15 01:00:16.806483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.000 qpair failed and we were unable to recover it.
00:23:30.000 [2024-05-15 01:00:16.806666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.000 [2024-05-15 01:00:16.806819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.806848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.807060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.807282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.807307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.807440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.807658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.807710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.807946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.808187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.808239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.808476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.808596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.808621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.808841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.809138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.809168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.809386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.809655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.809681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.809812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.809992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.810032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.810255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.810467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.810545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.810678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.810838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.810879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.811126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.811332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.811357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.811585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.811883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.811908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.812124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.812384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.812428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.812607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.812856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.812884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.813128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.813355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.813407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.813632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.813892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.813961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.814191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.814411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.814458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.814680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.814907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.814940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.815152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.815403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.815459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.815679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.815914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.815993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.816184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.816378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.816403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.816590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.816735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.816761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.816999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.817137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.817162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.817376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.817594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.817622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.817840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.818015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.818041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.818307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.818553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.818592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.818845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.819070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.819123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.819377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.819611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.819638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.819769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.820003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.820052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.820279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.820499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.820552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.820739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.820994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.821022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.821219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.821420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.821448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.821681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.821958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.822010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.822241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.822449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.822474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.822607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.822824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.822873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.823075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.823302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.823341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.823541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.823698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.823723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.823920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.824157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.824183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.824405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.824531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.824558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.824788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.824994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.825020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.825313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.825524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.825551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.825793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.826039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.826098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.826309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.826540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.826592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.826850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.827088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.827114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.827333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.827645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.827699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.827963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.828189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.828242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.828374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.828594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.828644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.828874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.829091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.829144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.829374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.829608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.829661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.829829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.829988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.830014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.830150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.830364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.830416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.830708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.830941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.830968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.001 [2024-05-15 01:00:16.831224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.831472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.001 [2024-05-15 01:00:16.831520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.001 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.831697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.831966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.832009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.832292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.832550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.832608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.832742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.832954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.832999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.833251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.833520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.833562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.833803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.834130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.834189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.834413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.834744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.834793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.834924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.835182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.835231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.835435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.835689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.835741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.835994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.836209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.836247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.836548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.836849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.836875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.837058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.837307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.837358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.837660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.837997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.838024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.838252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.838482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.838507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.838709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.838860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.838886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.839169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.839371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.839410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.839537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.839768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.839825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.840038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.840166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.840191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.840478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.840673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.840699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.840957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.841121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.841146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.841309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.841641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.841666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.841882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.842121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.842146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.842314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.842622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.842670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.842858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.843150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.843177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.843361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.843585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.843610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.843897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.844132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.844160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.844355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.844644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.844687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.844926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.845210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.845240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.845485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.845723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.845748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.845992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.846176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.846218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.846517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.846816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.846864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.847084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.847312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.847338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.847563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.847800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.847846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.848042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.848251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.848276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.848321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef6190 (9): Bad file descriptor
00:23:30.002 [2024-05-15 01:00:16.848617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.848949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.848990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.849268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.849422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.849447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.849745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.849880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.849908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.850209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.850524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.850586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.850810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.851037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.851064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.851315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.851518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.851545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.851801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.852030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.852083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.852337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.852633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.852684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.852919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.853188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.853240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.853449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.853583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.853610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.853890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.854185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.854235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.854446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.854636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.854661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.854925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.855156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.855210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.855397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.855610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.855662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.855886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.856062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.856088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.856313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.856572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.856621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.856852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.857143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.857200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.857333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.857517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.857544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.857845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.857975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.858001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.858132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.858339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.858365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.002 qpair failed and we were unable to recover it.
00:23:30.002 [2024-05-15 01:00:16.858561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.002 [2024-05-15 01:00:16.858796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.858821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.003 qpair failed and we were unable to recover it.
00:23:30.003 [2024-05-15 01:00:16.859017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.859213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.859240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.003 qpair failed and we were unable to recover it.
00:23:30.003 [2024-05-15 01:00:16.859451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.859663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.859717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.003 qpair failed and we were unable to recover it.
00:23:30.003 [2024-05-15 01:00:16.859954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.860230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.860281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.003 qpair failed and we were unable to recover it.
00:23:30.003 [2024-05-15 01:00:16.860522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.860784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.860838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.003 qpair failed and we were unable to recover it.
00:23:30.003 [2024-05-15 01:00:16.861081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.861235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.861261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.003 qpair failed and we were unable to recover it.
00:23:30.003 [2024-05-15 01:00:16.861482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.861748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.861780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.003 qpair failed and we were unable to recover it.
00:23:30.003 [2024-05-15 01:00:16.862003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.862276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.862316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.003 qpair failed and we were unable to recover it.
00:23:30.003 [2024-05-15 01:00:16.863320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.863542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.863569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.003 qpair failed and we were unable to recover it.
00:23:30.003 [2024-05-15 01:00:16.863770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.864035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.864074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.003 qpair failed and we were unable to recover it.
00:23:30.003 [2024-05-15 01:00:16.864283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.864560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.864610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.003 qpair failed and we were unable to recover it.
00:23:30.003 [2024-05-15 01:00:16.864792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.865034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.865060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.003 qpair failed and we were unable to recover it.
00:23:30.003 [2024-05-15 01:00:16.865299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.865554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.865608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.003 qpair failed and we were unable to recover it.
00:23:30.003 [2024-05-15 01:00:16.866478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.866686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.866738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.003 qpair failed and we were unable to recover it.
00:23:30.003 [2024-05-15 01:00:16.866991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.867181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.003 [2024-05-15 01:00:16.867206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.003 qpair failed and we were unable to recover it.
00:23:30.003 [2024-05-15 01:00:16.867469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.867770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.867815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.868028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.868258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.868284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.868474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.868758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.868797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.868992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.869177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.869203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.869511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.869755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.869809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.870030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.870195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.870220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.870517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.870784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.870834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 
00:23:30.003 [2024-05-15 01:00:16.871050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.871306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.871365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.871502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.871755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.871808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.871944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.872196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.872254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.872464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.872751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.872801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.873027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.873235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.873262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.873497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.873739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.873768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.873974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.874206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.874257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 
00:23:30.003 [2024-05-15 01:00:16.874476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.874809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.874860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.874998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.875175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.875202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.875414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.875626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.875677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.875954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.876188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.876215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.876398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.876667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.876717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.877002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.877187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.877240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.003 qpair failed and we were unable to recover it. 00:23:30.003 [2024-05-15 01:00:16.877470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.003 [2024-05-15 01:00:16.877671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.877698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 
00:23:30.004 [2024-05-15 01:00:16.877841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.877977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.878005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.878142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.878386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.878438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.878639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.878913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.878974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.879226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.879479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.879528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.879669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.879929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.879987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.880177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.880327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.880353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.880570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.880861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.880911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 
00:23:30.004 [2024-05-15 01:00:16.881163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.881321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.881347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.881587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.881842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.881898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.882098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.882369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.882415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.882705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.882995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.883021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.883165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.883392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.883442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.883696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.883923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.883956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.884167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.884393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.884445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 
00:23:30.004 [2024-05-15 01:00:16.884589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.884796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.884824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.885042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.885307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.885357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.885582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.885836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.885885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.886101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.886374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.886431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.886661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.886802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.886827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.887086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.887369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.887423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.887566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.887758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.887785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 
00:23:30.004 [2024-05-15 01:00:16.888042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.888237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.888265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.888489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.888729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.888770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.888965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.889155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.889203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.889467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.889700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.889753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.889997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.890192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.890244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.890439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.890691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.890743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.891001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.891182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.891224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 
00:23:30.004 [2024-05-15 01:00:16.891497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.891754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.891805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.892078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.892343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.892391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.892595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.892815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.892839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.893060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.893341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.893390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.893646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.893867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.893892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.894086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.894312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.894369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.894587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.894784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.894813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 
00:23:30.004 [2024-05-15 01:00:16.895009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.895236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.895282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.895496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.895702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.895731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.004 qpair failed and we were unable to recover it. 00:23:30.004 [2024-05-15 01:00:16.895970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.004 [2024-05-15 01:00:16.896193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.896271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.896494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.896819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.896869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.897009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.897228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.897270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.897463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.897691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.897739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.897996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.898124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.898150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 
00:23:30.005 [2024-05-15 01:00:16.898411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.898696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.898743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.898876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.899128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.899177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.899389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.899602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.899629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.899850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.900136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.900186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.900384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.900594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.900645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.900880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.901138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.901191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.901438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.901690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.901730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 
00:23:30.005 [2024-05-15 01:00:16.901969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.902161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.902215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.902467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.902621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.902646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.902928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.903239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.903290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.903518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.903710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.903751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.904000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.904144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.904172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.904403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.904653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.904704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.904837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.905066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.905117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 
00:23:30.005 [2024-05-15 01:00:16.905357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.905587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.905640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.905827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.906124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.906177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.906362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.906564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.906617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.906779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.907016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.907043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.907268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.907492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.907517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.907716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.907985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.908033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.908231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.908374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.908400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 
00:23:30.005 [2024-05-15 01:00:16.908611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.908828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.908875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.909014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.909253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.909300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.909526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.909716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.909758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.909890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.910084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.910136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.910259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.910492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.910543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.910776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.911091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.911150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.911419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.911685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.911738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 
00:23:30.005 [2024-05-15 01:00:16.911919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.912109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.912161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.912377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.912636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.912691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.912908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.913208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.913257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.913501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.913673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.913698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.913988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.914118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.005 [2024-05-15 01:00:16.914143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.005 qpair failed and we were unable to recover it. 00:23:30.005 [2024-05-15 01:00:16.914355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.914601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.914648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.006 qpair failed and we were unable to recover it. 00:23:30.006 [2024-05-15 01:00:16.914923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.915073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.915097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.006 qpair failed and we were unable to recover it. 
00:23:30.006 [2024-05-15 01:00:16.915311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.915605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.915653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.006 qpair failed and we were unable to recover it. 00:23:30.006 [2024-05-15 01:00:16.915871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.916151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.916205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.006 qpair failed and we were unable to recover it. 00:23:30.006 [2024-05-15 01:00:16.916389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.916626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.916671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.006 qpair failed and we were unable to recover it. 00:23:30.006 [2024-05-15 01:00:16.916962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.917212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.917260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.006 qpair failed and we were unable to recover it. 00:23:30.006 [2024-05-15 01:00:16.917432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.917641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.917668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.006 qpair failed and we were unable to recover it. 00:23:30.006 [2024-05-15 01:00:16.917900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.918206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.918253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.006 qpair failed and we were unable to recover it. 00:23:30.006 [2024-05-15 01:00:16.918457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.918693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.918744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.006 qpair failed and we were unable to recover it. 
00:23:30.006 [2024-05-15 01:00:16.918902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.919130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.919155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.006 qpair failed and we were unable to recover it. 00:23:30.006 [2024-05-15 01:00:16.919376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.919622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.919647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.006 qpair failed and we were unable to recover it. 00:23:30.006 [2024-05-15 01:00:16.919831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.920134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.920183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.006 qpair failed and we were unable to recover it. 00:23:30.006 [2024-05-15 01:00:16.920480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.920783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.920808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.006 qpair failed and we were unable to recover it. 00:23:30.006 [2024-05-15 01:00:16.921051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.921351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.921408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.006 qpair failed and we were unable to recover it. 00:23:30.006 [2024-05-15 01:00:16.921592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.921837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.921889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.006 qpair failed and we were unable to recover it. 00:23:30.006 [2024-05-15 01:00:16.922136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.922392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.006 [2024-05-15 01:00:16.922439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.006 qpair failed and we were unable to recover it. 
00:23:30.006 [2024-05-15 01:00:16.922657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.006 [2024-05-15 01:00:16.922858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.006 [2024-05-15 01:00:16.922899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.006 qpair failed and we were unable to recover it.
[... the same failure cycle repeats continuously from 01:00:16.923 through 01:00:16.997: two connect() failures with errno = 111, a sock connection error from nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it.", for every attempted qpair (tqpair=0x1eff6d0, 0x7f7e9c000b90, 0x7f7ea4000b90, 0x7f7eac000b90), all against addr=10.0.0.2, port=4420 ...]
00:23:30.009 [2024-05-15 01:00:16.998240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.009 [2024-05-15 01:00:16.998471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.009 [2024-05-15 01:00:16.998496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.009 qpair failed and we were unable to recover it. 00:23:30.009 [2024-05-15 01:00:16.998690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.009 [2024-05-15 01:00:16.998941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.009 [2024-05-15 01:00:16.998968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.009 qpair failed and we were unable to recover it. 00:23:30.009 [2024-05-15 01:00:16.999101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.009 [2024-05-15 01:00:16.999366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.009 [2024-05-15 01:00:16.999417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.009 qpair failed and we were unable to recover it. 00:23:30.009 [2024-05-15 01:00:16.999634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.009 [2024-05-15 01:00:16.999911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.009 [2024-05-15 01:00:16.999978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.009 qpair failed and we were unable to recover it. 00:23:30.009 [2024-05-15 01:00:17.000151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.009 [2024-05-15 01:00:17.000357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.009 [2024-05-15 01:00:17.000410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.009 qpair failed and we were unable to recover it. 00:23:30.009 [2024-05-15 01:00:17.000614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.009 [2024-05-15 01:00:17.000855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.009 [2024-05-15 01:00:17.000880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.009 qpair failed and we were unable to recover it. 00:23:30.009 [2024-05-15 01:00:17.001039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.009 [2024-05-15 01:00:17.001305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.009 [2024-05-15 01:00:17.001354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.009 qpair failed and we were unable to recover it. 
00:23:30.009 [2024-05-15 01:00:17.001480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.009 [2024-05-15 01:00:17.001662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.009 [2024-05-15 01:00:17.001688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.009 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.001913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.002226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.002274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.002487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.002730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.002777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.002959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.003218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.003272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.003520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.003772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.003819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.004073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.004364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.004422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.004663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.004909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.004942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 
00:23:30.010 [2024-05-15 01:00:17.005212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.005473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.005522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.005721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.005945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.005985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.006252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.006544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.006595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.006797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.007072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.007125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.007257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.007436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.007466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.007753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.007985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.008011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.008276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.008557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.008587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 
00:23:30.010 [2024-05-15 01:00:17.008863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.009022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.009047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.009241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.009435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.009488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.009682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.009901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.009960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.010179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.010339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.010366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.010621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.010867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.010922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.011151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.011382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.011432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.011654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.011978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.012004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 
00:23:30.010 [2024-05-15 01:00:17.012208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.012446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.012496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.012632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.012839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.012889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.013094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.013254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.013281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.013541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.013780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.013821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.014004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.014245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.014292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.014427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.014623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.014674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.014880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.015179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.015227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 
00:23:30.010 [2024-05-15 01:00:17.015428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.015641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.015665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.015915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.016159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.016212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.016421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.016604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.016630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.016866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.017172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.017219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.017415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.017627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.017681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.017926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.018168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.018221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.018436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.018681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.018734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 
00:23:30.010 [2024-05-15 01:00:17.018985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.019188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.019234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.019368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.019695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.019744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.019998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.020212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.020263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.020491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.020782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.020831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.021043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.021267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.021320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.021530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.021743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.021769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.021955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.022217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.022266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 
00:23:30.010 [2024-05-15 01:00:17.022468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.022624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.022650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.022905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.023210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.023260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.023527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.023737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.023778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.023914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.024145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.024197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.024409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.024658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.024707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.024947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.025138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.025178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.025386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.025603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.025648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 
00:23:30.010 [2024-05-15 01:00:17.025779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.025991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.026017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.026148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.026371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.026424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.026569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.026785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.026838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.027082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.027253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.027278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.027466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.027672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.027700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.027963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.028184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.028210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 00:23:30.010 [2024-05-15 01:00:17.028470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.028698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.028751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.010 qpair failed and we were unable to recover it. 
00:23:30.010 [2024-05-15 01:00:17.028970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.010 [2024-05-15 01:00:17.029158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.029213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.029350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.029574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.029621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.029887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.030178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.030228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.030453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.030705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.030756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.030999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.031269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.031322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.031581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.031822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.031872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.032131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.032344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.032391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 
00:23:30.011 [2024-05-15 01:00:17.032675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.032959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.033007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.033235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.033434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.033486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.033766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.034059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.034106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.034372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.034622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.034674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.034951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.035214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.035254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.035581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.035845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.035894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.036185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.036438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.036508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 
00:23:30.011 [2024-05-15 01:00:17.036697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.036988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.037026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.037254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.037491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.037546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.037777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.037994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.038020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.038265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.038487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.038515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.038711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.039014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.039068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.039276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.039494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.039523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.039860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.040113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.040165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 
00:23:30.011 [2024-05-15 01:00:17.040426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.040583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.040610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.040788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.041068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.041121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.041385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.041599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.041637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.041968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.042212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.042261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.042531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.042808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.042860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.043091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.043401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.043447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.043718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.043959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.043999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 
00:23:30.011 [2024-05-15 01:00:17.044210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.044385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.044414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.011 [2024-05-15 01:00:17.044571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.044804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.011 [2024-05-15 01:00:17.044857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.011 qpair failed and we were unable to recover it. 00:23:30.282 [2024-05-15 01:00:17.045058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.282 [2024-05-15 01:00:17.045344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.282 [2024-05-15 01:00:17.045411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.282 qpair failed and we were unable to recover it. 00:23:30.282 [2024-05-15 01:00:17.045582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.282 [2024-05-15 01:00:17.045830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.282 [2024-05-15 01:00:17.045893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420 00:23:30.282 qpair failed and we were unable to recover it. 00:23:30.282 [2024-05-15 01:00:17.046085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.282 [2024-05-15 01:00:17.046329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.282 [2024-05-15 01:00:17.046383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.282 qpair failed and we were unable to recover it. 00:23:30.282 [2024-05-15 01:00:17.046641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.282 [2024-05-15 01:00:17.046906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.282 [2024-05-15 01:00:17.046955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.282 qpair failed and we were unable to recover it. 00:23:30.282 [2024-05-15 01:00:17.047218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.282 [2024-05-15 01:00:17.047493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.282 [2024-05-15 01:00:17.047540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.282 qpair failed and we were unable to recover it. 
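For reference: errno = 111 on Linux is ECONNREFUSED — the TCP connection attempt to 10.0.0.2:4420 (the NVMe/TCP port used throughout this log) was actively refused, typically because nothing was listening on that port at that moment. A minimal standalone sketch (plain POSIX sockets, not SPDK's posix.c) that reproduces the same errno against a port with no listener; the address and port are taken from the log itself:

/* connect_probe.c -- minimal sketch, not SPDK code: a plain blocking
 * connect() to the address/port from the log. With no listener on
 * 10.0.0.2:4420 this prints "connect() failed, errno = 111
 * (Connection refused)", matching the posix_sock_create lines above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    if (inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* On a refused connection: errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Build and run with e.g. "gcc -o connect_probe connect_probe.c && ./connect_probe" on the initiator host to confirm the refusal independently of the test harness.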
[... the same failure pattern continues for tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 from 01:00:17.046 through 01:00:17.061 ...]
00:23:30.283 [2024-05-15 01:00:17.060810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.283 [2024-05-15 01:00:17.061051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.283 [2024-05-15 01:00:17.061082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.283 qpair failed and we were unable to recover it.
00:23:30.283 [2024-05-15 01:00:17.061347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.283 [2024-05-15 01:00:17.061563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.283 [2024-05-15 01:00:17.061614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.283 qpair failed and we were unable to recover it. 00:23:30.283 [2024-05-15 01:00:17.061747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.283 [2024-05-15 01:00:17.061926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.283 [2024-05-15 01:00:17.061961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.283 qpair failed and we were unable to recover it. 00:23:30.283 [2024-05-15 01:00:17.062226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.283 [2024-05-15 01:00:17.062473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.283 [2024-05-15 01:00:17.062524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.283 qpair failed and we were unable to recover it. 00:23:30.283 [2024-05-15 01:00:17.062747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.283 [2024-05-15 01:00:17.062870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.283 [2024-05-15 01:00:17.062895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.283 qpair failed and we were unable to recover it. 00:23:30.283 [2024-05-15 01:00:17.063143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.283 [2024-05-15 01:00:17.063302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.283 [2024-05-15 01:00:17.063329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.283 qpair failed and we were unable to recover it. 00:23:30.283 [2024-05-15 01:00:17.063501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.283 [2024-05-15 01:00:17.063656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.283 [2024-05-15 01:00:17.063682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.283 qpair failed and we were unable to recover it. 00:23:30.283 [2024-05-15 01:00:17.063906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.283 [2024-05-15 01:00:17.064196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.283 [2024-05-15 01:00:17.064247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.283 qpair failed and we were unable to recover it. 
00:23:30.283 [2024-05-15 01:00:17.064459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.283 [2024-05-15 01:00:17.064647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.283 [2024-05-15 01:00:17.064673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.283 qpair failed and we were unable to recover it. 00:23:30.283 [2024-05-15 01:00:17.064861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.065072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.065121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.065327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.065490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.065516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.065761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.065991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.066018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.066344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.066632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.066683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.066859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.067021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.067048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.067287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.067498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.067523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 
00:23:30.284 [2024-05-15 01:00:17.067707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.067919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.067966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.068149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.068385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.068439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.068566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.068756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.068805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.069004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.069196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.069244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.069442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.069652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.069701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.069975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.070157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.070196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.070424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.070632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.070657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 
00:23:30.284 [2024-05-15 01:00:17.070870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.071044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.071070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.071301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.071506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.071533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.071761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.071912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.071943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.072132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.072442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.072495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.072723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.072886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.072911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.073160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.073381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.073435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.073619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.073785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.073838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 
00:23:30.284 [2024-05-15 01:00:17.073972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.074239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.074298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.074533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.074761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.074814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.075017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.075244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.284 [2024-05-15 01:00:17.075299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.284 qpair failed and we were unable to recover it. 00:23:30.284 [2024-05-15 01:00:17.075514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.075781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.075831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.076034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.076243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.076293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.076517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.076646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.076677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.076883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.077135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.077188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 
00:23:30.285 [2024-05-15 01:00:17.077397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.077555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.077580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.077760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.077912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.077947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.078156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.078302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.078327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.078545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.078817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.078877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.079007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.079226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.079278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.079498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.079728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.079775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.080014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.080249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.080292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 
00:23:30.285 [2024-05-15 01:00:17.080534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.080755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.080782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.080983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.081262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.081308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.081546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.081844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.081894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.082087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.082359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.082385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.082624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.082922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.082986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.083118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.083361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.083410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.083635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.083899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.083968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 
00:23:30.285 [2024-05-15 01:00:17.084181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.084308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.084334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.084594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.084825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.084878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.085077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.085277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.085301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.085534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.085783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.085826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.285 qpair failed and we were unable to recover it. 00:23:30.285 [2024-05-15 01:00:17.086060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.086304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.285 [2024-05-15 01:00:17.086329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.086510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.086650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.086675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.086866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.087067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.087121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 
00:23:30.286 [2024-05-15 01:00:17.087304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.087493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.087546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.087676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.087893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.087957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.088189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.088348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.088375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.088516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.088651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.088677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.088904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.089198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.089253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.089384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.089598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.089659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.089907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.090171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.090226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 
00:23:30.286 [2024-05-15 01:00:17.090409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.090659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.090684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.090877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.091079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.091132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.091360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.091614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.091666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.091859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.092078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.092104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.092275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.092521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.092548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.092771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.093010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.093036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.093267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.093519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.093546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 
00:23:30.286 [2024-05-15 01:00:17.093795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.094022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.094070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.094257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.094500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.094552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.094821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.095019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.095075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.095330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.095568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.095621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.095856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.096080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.096130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.096344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.096596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.096627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.286 qpair failed and we were unable to recover it. 00:23:30.286 [2024-05-15 01:00:17.096853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.286 [2024-05-15 01:00:17.096986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.097012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 
00:23:30.287 [2024-05-15 01:00:17.097279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.097513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.097538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 00:23:30.287 [2024-05-15 01:00:17.097804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.098025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.098110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 00:23:30.287 [2024-05-15 01:00:17.098394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.098606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.098631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 00:23:30.287 [2024-05-15 01:00:17.098814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.099061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.099111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 00:23:30.287 [2024-05-15 01:00:17.099241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.099462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.099516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 00:23:30.287 [2024-05-15 01:00:17.099715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.100052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.100102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 00:23:30.287 [2024-05-15 01:00:17.100347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.100529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.100583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 
00:23:30.287 [2024-05-15 01:00:17.100817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.101027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.101053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 00:23:30.287 [2024-05-15 01:00:17.101241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.101433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.101459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 00:23:30.287 [2024-05-15 01:00:17.101670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.101892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.101919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 00:23:30.287 [2024-05-15 01:00:17.102128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.102390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.102415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 00:23:30.287 [2024-05-15 01:00:17.102631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.102827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.102874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 00:23:30.287 [2024-05-15 01:00:17.103110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.103382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.103432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 00:23:30.287 [2024-05-15 01:00:17.103637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.103859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.103886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 
00:23:30.287 [2024-05-15 01:00:17.104026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.104253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.104303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 00:23:30.287 [2024-05-15 01:00:17.104517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.104668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.104695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 00:23:30.287 [2024-05-15 01:00:17.104826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.105045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.105125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.287 qpair failed and we were unable to recover it. 00:23:30.287 [2024-05-15 01:00:17.105393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.287 [2024-05-15 01:00:17.105633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.289 [2024-05-15 01:00:17.105660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.289 qpair failed and we were unable to recover it. 00:23:30.289 [2024-05-15 01:00:17.105873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.289 [2024-05-15 01:00:17.106068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.289 [2024-05-15 01:00:17.106120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.289 qpair failed and we were unable to recover it. 00:23:30.289 [2024-05-15 01:00:17.106316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.289 [2024-05-15 01:00:17.106554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.289 [2024-05-15 01:00:17.106607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.289 qpair failed and we were unable to recover it. 00:23:30.289 [2024-05-15 01:00:17.106833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.289 [2024-05-15 01:00:17.107029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.289 [2024-05-15 01:00:17.107082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.289 qpair failed and we were unable to recover it. 
00:23:30.289 [2024-05-15 01:00:17.107306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.289 [2024-05-15 01:00:17.107467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.289 [2024-05-15 01:00:17.107494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.289 qpair failed and we were unable to recover it.
[01:00:17.107 through 01:00:17.179: the same three-line record repeats for every reconnect attempt -- two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error, then "qpair failed and we were unable to recover it." -- cycling through tqpairs 0x7f7eac000b90, 0x7f7ea4000b90, 0x7f7e9c000b90, and 0x1eff6d0, all against addr=10.0.0.2, port=4420.]
00:23:30.294 [2024-05-15 01:00:17.180032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.180175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.180200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.180455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.180707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.180733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.180961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.181236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.181285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.181472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.181650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.181675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.181907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.182204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.182251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.182531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.182684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.182709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.182991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.183149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.183203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 
00:23:30.294 [2024-05-15 01:00:17.183470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.183764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.183812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.184041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.184332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.184379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.184515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.184732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.184778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.184966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.185164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.185215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.185428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.185682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.185733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.185988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.186199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.186225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.186355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.186627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.186679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 
00:23:30.294 [2024-05-15 01:00:17.186808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.186941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.186967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.187168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.187324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.187349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.187478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.187623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.187648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.187811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.187962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.187988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.188178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.188401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.188453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.188705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.188991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.189017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.189247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.189447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.189474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 
00:23:30.294 [2024-05-15 01:00:17.189685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.189994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.190045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.190276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.190401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.190426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.190662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.191016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.191063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.294 [2024-05-15 01:00:17.191302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.191528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.294 [2024-05-15 01:00:17.191581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.294 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.191805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.192056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.192082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.192298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.192510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.192536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.192780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.193091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.193141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 
00:23:30.295 [2024-05-15 01:00:17.193270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.193450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.193475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.193742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.194023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.194049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.194293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.194575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.194624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.194806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.195074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.195100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.195358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.195660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.195712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.195921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.196173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.196228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.196492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.196744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.196806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 
00:23:30.295 [2024-05-15 01:00:17.197068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.197341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.197391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.197636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.197791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.197817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.198085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.198348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.198373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.198595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.198807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.198832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.199055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.199203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.199233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.199502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.199744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.199768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.199989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.200283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.200333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 
00:23:30.295 [2024-05-15 01:00:17.200517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.200738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.200762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.201071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.201353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.201404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.201662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.201807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.201832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.202046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.202282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.202331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.202492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.202760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.202807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.203136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.203382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.203423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.203620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.203868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.203918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 
00:23:30.295 [2024-05-15 01:00:17.204150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.204469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.204517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.204752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.204969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.204995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.205322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.205576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.205616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.205803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.206107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.206155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.206360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.206671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.206718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.206849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.207040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.207084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.207214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.207353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.207379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 
00:23:30.295 [2024-05-15 01:00:17.207649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.207922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.207975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.208196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.208418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.208472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.295 [2024-05-15 01:00:17.208715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.208991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.295 [2024-05-15 01:00:17.209017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.295 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.209269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.209514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.209568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.209703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.209880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.209926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.210154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.210367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.210421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.210609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.210813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.210863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 
00:23:30.296 [2024-05-15 01:00:17.210997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.211180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.211205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.211428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.211654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.211704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.211972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.212235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.212293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.212427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.212705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.212733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.212962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.213211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.213254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.213507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.213654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.213679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.213893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.214147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.214203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 
00:23:30.296 [2024-05-15 01:00:17.214418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.214680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.214705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.214982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.215194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.215236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.215506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.215711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.215738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.215930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.216206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.216257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.216401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.216585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.216612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.216854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.217100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.217126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.217416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.217652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.217704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 
00:23:30.296 [2024-05-15 01:00:17.217955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.218155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.218205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.218408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.218563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.218589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.218790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.219067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.219124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.219419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.219667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.219719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.219961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.220207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.220232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.220479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.220727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.220776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.220966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.221204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.221253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 
00:23:30.296 [2024-05-15 01:00:17.221462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.221765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.221814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.222025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.222158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.222185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.222440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.222691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.222743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.222872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.223140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.223195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.223378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.223607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.223659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.223892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.224183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.224238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.224391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.224660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.224707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 
00:23:30.296 [2024-05-15 01:00:17.224953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.225208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.225260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.225526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.225784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.296 [2024-05-15 01:00:17.225836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.296 qpair failed and we were unable to recover it. 00:23:30.296 [2024-05-15 01:00:17.226062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.226304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.226358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.297 qpair failed and we were unable to recover it. 00:23:30.297 [2024-05-15 01:00:17.226635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.226873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.226900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.297 qpair failed and we were unable to recover it. 00:23:30.297 [2024-05-15 01:00:17.227034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.227212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.227266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.297 qpair failed and we were unable to recover it. 00:23:30.297 [2024-05-15 01:00:17.227486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.227755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.227817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.297 qpair failed and we were unable to recover it. 00:23:30.297 [2024-05-15 01:00:17.228089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.228346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.228373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.297 qpair failed and we were unable to recover it. 
00:23:30.297 [2024-05-15 01:00:17.228618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.228769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.228794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.297 qpair failed and we were unable to recover it. 00:23:30.297 [2024-05-15 01:00:17.229081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.229329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.229382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.297 qpair failed and we were unable to recover it. 00:23:30.297 [2024-05-15 01:00:17.229631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.229845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.229898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.297 qpair failed and we were unable to recover it. 00:23:30.297 [2024-05-15 01:00:17.230156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.230459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.230510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.297 qpair failed and we were unable to recover it. 00:23:30.297 [2024-05-15 01:00:17.230725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.231007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.231052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.297 qpair failed and we were unable to recover it. 00:23:30.297 [2024-05-15 01:00:17.231285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.231513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.231565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.297 qpair failed and we were unable to recover it. 00:23:30.297 [2024-05-15 01:00:17.231693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.231877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.297 [2024-05-15 01:00:17.231940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.297 qpair failed and we were unable to recover it. 
00:23:30.298 [2024-05-15 01:00:17.256622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.298 [2024-05-15 01:00:17.256859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.298 [2024-05-15 01:00:17.256910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.298 qpair failed and we were unable to recover it.
00:23:30.298 [2024-05-15 01:00:17.257124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.298 [2024-05-15 01:00:17.257404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.298 [2024-05-15 01:00:17.257453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:30.298 qpair failed and we were unable to recover it.
00:23:30.298 [2024-05-15 01:00:17.257643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.298 [2024-05-15 01:00:17.257896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.298 [2024-05-15 01:00:17.257952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:30.298 qpair failed and we were unable to recover it.
00:23:30.298 [2024-05-15 01:00:17.258167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.298 [2024-05-15 01:00:17.258429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.299 [2024-05-15 01:00:17.258478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:30.299 qpair failed and we were unable to recover it.
00:23:30.299 [2024-05-15 01:00:17.258695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.299 [2024-05-15 01:00:17.258926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.299 [2024-05-15 01:00:17.258981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:30.299 qpair failed and we were unable to recover it.
00:23:30.299 [2024-05-15 01:00:17.259263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.299 [2024-05-15 01:00:17.259492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.299 [2024-05-15 01:00:17.259546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.299 qpair failed and we were unable to recover it.
00:23:30.299 [2024-05-15 01:00:17.259733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.299 [2024-05-15 01:00:17.259995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.299 [2024-05-15 01:00:17.260021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.299 qpair failed and we were unable to recover it.
00:23:30.301 [2024-05-15 01:00:17.305981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.306245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.306295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.301 qpair failed and we were unable to recover it. 00:23:30.301 [2024-05-15 01:00:17.306579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.306887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.306954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.301 qpair failed and we were unable to recover it. 00:23:30.301 [2024-05-15 01:00:17.307151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.307337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.307363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.301 qpair failed and we were unable to recover it. 00:23:30.301 [2024-05-15 01:00:17.307598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.307841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.307896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.301 qpair failed and we were unable to recover it. 00:23:30.301 [2024-05-15 01:00:17.308106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.308358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.308384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.301 qpair failed and we were unable to recover it. 00:23:30.301 [2024-05-15 01:00:17.308575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.308799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.308848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.301 qpair failed and we were unable to recover it. 00:23:30.301 [2024-05-15 01:00:17.309036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.309318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.309368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.301 qpair failed and we were unable to recover it. 
00:23:30.301 [2024-05-15 01:00:17.309561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.309839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.309888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.301 qpair failed and we were unable to recover it. 00:23:30.301 [2024-05-15 01:00:17.310135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.310437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.310494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.301 qpair failed and we were unable to recover it. 00:23:30.301 [2024-05-15 01:00:17.310758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.311017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.311067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.301 qpair failed and we were unable to recover it. 00:23:30.301 [2024-05-15 01:00:17.311298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.311571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.311632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.301 qpair failed and we were unable to recover it. 00:23:30.301 [2024-05-15 01:00:17.311855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.301 [2024-05-15 01:00:17.312094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.312146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.312414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.312720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.312772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.313037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.313300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.313357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 
00:23:30.302 [2024-05-15 01:00:17.313632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.313866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.313920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.314160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.314445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.314470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.314669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.314995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.315022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.315268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.315534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.315559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.315757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.315999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.316049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.316290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.316548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.316574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.316811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.317042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.317069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 
00:23:30.302 [2024-05-15 01:00:17.317291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.317567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.317616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.317905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.318122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.318149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.318424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.318661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.318716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.318954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.319257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.319308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.319438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.319696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.319723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.319995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.320244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.320291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.320423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.320620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.320645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 
00:23:30.302 [2024-05-15 01:00:17.320929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.321181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.321239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.321507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.321757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.321804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.322006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.322275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.322329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.322584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.322831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.322886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.323147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.323363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.323388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.323632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.323913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.323971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.324116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.324338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.324364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 
00:23:30.302 [2024-05-15 01:00:17.324512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.324726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.324776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.325009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.325140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.325165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.325398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.325663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.325711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.325971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.326212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.326237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.326426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.326643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.302 [2024-05-15 01:00:17.326691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.302 qpair failed and we were unable to recover it. 00:23:30.302 [2024-05-15 01:00:17.326968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.575 [2024-05-15 01:00:17.327239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.327266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.576 qpair failed and we were unable to recover it. 00:23:30.576 [2024-05-15 01:00:17.327509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.327800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.327853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.576 qpair failed and we were unable to recover it. 
00:23:30.576 [2024-05-15 01:00:17.328028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.328292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.328346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.576 qpair failed and we were unable to recover it. 00:23:30.576 [2024-05-15 01:00:17.328615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.328981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.329026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.576 qpair failed and we were unable to recover it. 00:23:30.576 [2024-05-15 01:00:17.329258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.329520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.329565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.576 qpair failed and we were unable to recover it. 00:23:30.576 [2024-05-15 01:00:17.329745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.330003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.330030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.576 qpair failed and we were unable to recover it. 00:23:30.576 [2024-05-15 01:00:17.330253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.330501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.330526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.576 qpair failed and we were unable to recover it. 00:23:30.576 [2024-05-15 01:00:17.330762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.330998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.331024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.576 qpair failed and we were unable to recover it. 00:23:30.576 [2024-05-15 01:00:17.331231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.331461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.331513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.576 qpair failed and we were unable to recover it. 
00:23:30.576 [2024-05-15 01:00:17.331640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.331857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.331910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.576 qpair failed and we were unable to recover it. 00:23:30.576 [2024-05-15 01:00:17.332111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.332326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.332353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.576 qpair failed and we were unable to recover it. 00:23:30.576 [2024-05-15 01:00:17.332512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.332745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.332799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.576 qpair failed and we were unable to recover it. 00:23:30.576 [2024-05-15 01:00:17.332939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.333068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.333093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.576 qpair failed and we were unable to recover it. 00:23:30.576 [2024-05-15 01:00:17.333328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.333563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.333612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.576 qpair failed and we were unable to recover it. 00:23:30.576 [2024-05-15 01:00:17.333832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.333991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.334018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.576 qpair failed and we were unable to recover it. 00:23:30.576 [2024-05-15 01:00:17.334153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.334336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.576 [2024-05-15 01:00:17.334398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.576 qpair failed and we were unable to recover it. 
[... the tqpair=0x7f7eac000b90 sequence continues through 2024-05-15 01:00:17.336, after which the identical failures appear for a second qpair: ...]
00:23:30.576 [2024-05-15 01:00:17.336580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.576 [2024-05-15 01:00:17.336835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.576 [2024-05-15 01:00:17.336887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:30.576 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f7ea4000b90 through 2024-05-15 01:00:17.342, then reverts to tqpair=0x7f7eac000b90: ...]
00:23:30.577 [2024-05-15 01:00:17.342265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.577 [2024-05-15 01:00:17.342503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.577 [2024-05-15 01:00:17.342553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.577 qpair failed and we were unable to recover it.
[... the tqpair=0x7f7eac000b90 sequence repeats through 2024-05-15 01:00:17.369, every attempt ending with "qpair failed and we were unable to recover it." ...]
00:23:30.579 [2024-05-15 01:00:17.369319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.369505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.369558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.369801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.370032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.370059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.370327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.370632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.370683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.370874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.371170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.371217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.371449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.371742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.371797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.372003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.372249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.372302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.372498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.372801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.372851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 
00:23:30.579 [2024-05-15 01:00:17.373162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.373507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.373554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.373767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.374009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.374058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.374315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.374582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.374630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.374870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.375062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.375110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.375385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.375666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.375719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.375989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.376286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.376335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.376591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.376950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.377007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 
00:23:30.579 [2024-05-15 01:00:17.377223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.377516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.377568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.377880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.378200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.378252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.378536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.378782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.378835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.379158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.379452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.379504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.379734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.379993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.380020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.380322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.380609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.380657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.380927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.381201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.381249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 
00:23:30.579 [2024-05-15 01:00:17.381568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.381720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.381746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.579 qpair failed and we were unable to recover it. 00:23:30.579 [2024-05-15 01:00:17.382074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.382358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.579 [2024-05-15 01:00:17.382407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.382655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.382863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.382916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.383163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.383457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.383507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.383786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.384015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.384043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.384270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.384535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.384589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.384896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.385188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.385239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 
00:23:30.580 [2024-05-15 01:00:17.385498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.385738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.385763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.386005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.386316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.386364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.386667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.386970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.387029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.387228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.387427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.387454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.387687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.387897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.387922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.388125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.388356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.388381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.388627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.388990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.389017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 
00:23:30.580 [2024-05-15 01:00:17.389329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.389487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.389514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.389786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.390138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.390188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.390399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.390583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.390609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.390866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.391124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.391168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.391493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.391780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.391828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.392065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.392427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.392474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.392701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.392998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.393025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 
00:23:30.580 [2024-05-15 01:00:17.393289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.393554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.393580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.393902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.394124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.394150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.394393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.394659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.394707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.580 [2024-05-15 01:00:17.394858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.395088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.580 [2024-05-15 01:00:17.395138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.580 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.395424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.395697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.395722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.395986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.396289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.396335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.396622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.396893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.396952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 
00:23:30.581 [2024-05-15 01:00:17.397220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.397531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.397581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.397807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.397986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.398014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.398293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.398569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.398621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.398891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.399152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.399205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.399485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.399778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.399803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.400014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.400197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.400224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.400526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.400804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.400853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 
00:23:30.581 [2024-05-15 01:00:17.401119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.401412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.401459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.401603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.401881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.401941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.402207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.402460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.402506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.402788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.403083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.403134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.403337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.403617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.403670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.403891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.404116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.404142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.404412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.404666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.404693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 
00:23:30.581 [2024-05-15 01:00:17.404945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.405254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.405303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.405530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.405777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.405824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.406098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.406330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.406378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.406641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.406960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.407003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.407303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.407597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.407649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.407858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.408106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.408160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.408372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.408626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.408651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 
00:23:30.581 [2024-05-15 01:00:17.408898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.409203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.409252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.409520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.409812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.409865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.410134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.410325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.410351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.410587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.410870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.410921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.411064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.411361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.411412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.581 qpair failed and we were unable to recover it. 00:23:30.581 [2024-05-15 01:00:17.411663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.581 [2024-05-15 01:00:17.411875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.411900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.412137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.412366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.412418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 
00:23:30.582 [2024-05-15 01:00:17.412641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.412905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.412965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.413228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.413536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.413584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.413863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.414130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.414157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.414399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.414668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.414715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.414874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.415018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.415046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.415304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.415521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.415548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.415829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.416130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.416180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 
00:23:30.582 [2024-05-15 01:00:17.416406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.416692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.416745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.416983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.417219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.417244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.417477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.417761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.417808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.418114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.418403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.418458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.418607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.418871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.418924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.419139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.419334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.419387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.419657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.419918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.419987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 
00:23:30.582 [2024-05-15 01:00:17.420254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.420539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.420585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.420795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.420996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.421023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.421293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.421530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.421587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.421821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.422121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.422169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.422378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.422634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.422681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.422943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.423197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.423246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 00:23:30.582 [2024-05-15 01:00:17.423505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.423747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.582 [2024-05-15 01:00:17.423802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.582 qpair failed and we were unable to recover it. 
00:23:30.582 [2024-05-15 01:00:17.424059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.582 [2024-05-15 01:00:17.424335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.582 [2024-05-15 01:00:17.424382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.582 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 (ECONNREFUSED) / sock connection error sequence repeats for tqpair=0x7f7eac000b90 (addr=10.0.0.2, port=4420) on every attempt from 01:00:17.424665 through 01:00:17.457905, each ending "qpair failed and we were unable to recover it." ...]
00:23:30.585 [2024-05-15 01:00:17.458108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.585 [2024-05-15 01:00:17.458273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.585 [2024-05-15 01:00:17.458301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:30.585 qpair failed and we were unable to recover it.
[... same failure sequence repeats for tqpair=0x7f7ea4000b90 through 01:00:17.463235 ...]
00:23:30.585 [2024-05-15 01:00:17.463439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.585 [2024-05-15 01:00:17.463627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.585 [2024-05-15 01:00:17.463671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.585 qpair failed and we were unable to recover it.
[... same failure sequence repeats for tqpair=0x7f7eac000b90 through 01:00:17.468908 ...]
00:23:30.586 [2024-05-15 01:00:17.469103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.586 [2024-05-15 01:00:17.469324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.586 [2024-05-15 01:00:17.469372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:30.586 qpair failed and we were unable to recover it.
[... same failure sequence repeats for tqpair=0x7f7e9c000b90 through 01:00:17.482141 ...]
00:23:30.587 [2024-05-15 01:00:17.482315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.587 [2024-05-15 01:00:17.482481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.587 [2024-05-15 01:00:17.482524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.587 qpair failed and we were unable to recover it.
[... same failure sequence repeats for tqpair=0x7f7eac000b90 through 01:00:17.488657 ...]
00:23:30.587 [2024-05-15 01:00:17.488483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.587 [2024-05-15 01:00:17.488630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.587 [2024-05-15 01:00:17.488657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.587 qpair failed and we were unable to recover it. 00:23:30.587 [2024-05-15 01:00:17.488891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.489129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.489177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.489316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.489448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.489473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.489633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.489807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.489849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.490031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.490200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.490241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.490369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.490525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.490566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.490727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.490882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.490909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 
00:23:30.588 [2024-05-15 01:00:17.491072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.491271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.491301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.491505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.491665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.491707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.491862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.491993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.492019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.492211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.492398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.492422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.492571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.492765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.492793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.492969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.493146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.493186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.493338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.493505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.493545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 
00:23:30.588 [2024-05-15 01:00:17.493698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.493836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.493861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.494003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.494179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.494209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.494420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.494588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.494629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.494772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.494913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.494943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.495126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.495337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.495377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.495553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.495750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.495778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.495945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.496146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.496171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 
00:23:30.588 [2024-05-15 01:00:17.496346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.496514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.496558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.496741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.496887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.496927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.497096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.497280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.497319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.497445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.497653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.497692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.497850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.497999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.498026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.498176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.498367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.498405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.498570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.498734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.498760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 
00:23:30.588 [2024-05-15 01:00:17.498908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.499089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.499116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.499337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.499513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.499539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.588 qpair failed and we were unable to recover it. 00:23:30.588 [2024-05-15 01:00:17.499701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.499862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.588 [2024-05-15 01:00:17.499900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.500050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.500233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.500275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.500428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.500619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.500646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.500786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.500938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.500965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.501130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.501286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.501324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 
00:23:30.589 [2024-05-15 01:00:17.501484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.501627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.501654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.501839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.502018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.502058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.502207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.502365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.502403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.502553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.502712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.502749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.502956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.503141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.503179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.503345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.503511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.503536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.503671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.503815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.503853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 
00:23:30.589 [2024-05-15 01:00:17.504026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.504157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.504182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.504309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.504442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.504467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.504624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.504783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.504808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.504942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.505105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.505129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.505268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.505396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.505421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.505548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.505696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.505722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.505853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.505996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.506022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 
00:23:30.589 [2024-05-15 01:00:17.506151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.506294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.506319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.506475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.506608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.506634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.506789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.506918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.506953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.507118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.507276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.507301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.507432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.507558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.507583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.507711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.507840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.507867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.589 qpair failed and we were unable to recover it. 00:23:30.589 [2024-05-15 01:00:17.508041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.589 [2024-05-15 01:00:17.508174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.508199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 
00:23:30.590 [2024-05-15 01:00:17.508377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.508507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.508531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.508691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.508820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.508844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.508977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.509132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.509157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.509292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.509433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.509458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.509601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.509727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.509751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.509913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.510057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.510081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.510214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.510364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.510388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 
00:23:30.590 [2024-05-15 01:00:17.510539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.510688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.510713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.510847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.510974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.511000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.511169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.511298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.511325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.511462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.511614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.511639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.511769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.511910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.511939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.512080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.512213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.512240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.512371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.512531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.512556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 
00:23:30.590 [2024-05-15 01:00:17.512693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.512823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.512848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.512983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.513121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.513145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.513284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.513421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.513446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.513598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.513744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.513768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.513909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.514060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.514086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.514224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.514373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.514402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.514552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.514686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.514711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 
00:23:30.590 [2024-05-15 01:00:17.514874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.515001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.515026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.515185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.515331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.515355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.515489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.515615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.515641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.515815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.515948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.515973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.516106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.516262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.516286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.516412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.516544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.516573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 00:23:30.590 [2024-05-15 01:00:17.516708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.516835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.516859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.590 qpair failed and we were unable to recover it. 
00:23:30.590 [2024-05-15 01:00:17.517022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.517176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.590 [2024-05-15 01:00:17.517202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.517331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.517457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.517482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.517628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.517765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.517790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.517945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.518080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.518104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.518241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.518377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.518403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.518541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.518670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.518694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.518826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.518980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.519007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 
00:23:30.591 [2024-05-15 01:00:17.519289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.519423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.519449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.519601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.519737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.519764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.519902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.520074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.520100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.520232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.520356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.520382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.520515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.520776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.520802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.520968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.521142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.521167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.521307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.521442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.521470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 
00:23:30.591 [2024-05-15 01:00:17.521616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.521781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.521806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.521972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.522134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.522159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.522427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.522558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.522584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.522730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.522889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.522915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.523055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.523209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.523234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.523393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.523527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.523554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 00:23:30.591 [2024-05-15 01:00:17.523684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.523815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.591 [2024-05-15 01:00:17.523842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.591 qpair failed and we were unable to recover it. 
00:23:30.591 [2024-05-15 01:00:17.523978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.591 [2024-05-15 01:00:17.524108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.591 [2024-05-15 01:00:17.524134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.591 qpair failed and we were unable to recover it.
00:23:30.592 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats from 01:00:17.524271 through 01:00:17.530519, cycling through tqpair=0x7f7eac000b90, 0x1eff6d0, and 0x7f7e9c000b90, all with addr=10.0.0.2, port=4420 ...]
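[editor's note] errno = 111 here is ECONNREFUSED on Linux: while the target is down, nothing is listening on 10.0.0.2:4420, so every TCP connect the host initiator attempts is refused and the qpair cannot be re-established. A minimal standalone C sketch (illustrative only, not SPDK's posix_sock_create(); the address and port are taken from the log) that reproduces the same failure:

    /* Sketch: connect() to a port with no listener fails with
     * errno 111 (ECONNREFUSED on Linux), which is exactly what
     * posix_sock_create() keeps logging above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(4420) };  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no nvmf_tgt listening, errno is ECONNREFUSED (111). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

Run against a host with no listener on the port and it prints "connect() failed, errno = 111 (Connection refused)", matching the records above.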
00:23:30.592 [... connect() failed (errno = 111) and qpair-recovery errors continue from 01:00:17.530664 for tqpair=0x7f7eac000b90 ...]
00:23:30.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 4087922 Killed "${NVMF_APP[@]}" "$@"
00:23:30.592 [... the connect()/qpair failure sequence for tqpair=0x7f7eac000b90 continues through 01:00:17.532552 ...]
00:23:30.592 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:23:30.592 qpair failed and we were unable to recover it.
00:23:30.592 [... connect() failed (errno = 111) errors continue, now for tqpair=0x1eff6d0, interleaved with the test script trace ...]
00:23:30.592 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:23:30.592 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:30.593 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:23:30.593 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:30.593 [... the connect()/qpair failure sequence for tqpair=0x1eff6d0 repeats from 01:00:17.532856 through 01:00:17.536566, addr=10.0.0.2, port=4420 ...]
00:23:30.593 [... connect()/qpair failures for tqpair=0x1eff6d0 continue from 01:00:17.536710 through 01:00:17.537766 ...]
00:23:30.593 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4088348
00:23:30.593 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:23:30.593 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4088348
00:23:30.593 [2024-05-15 01:00:17.537903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.593 [2024-05-15 01:00:17.538050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.593 [2024-05-15 01:00:17.538078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.593 qpair failed and we were unable to recover it.
00:23:30.593 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 4088348 ']'
00:23:30.593 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:30.593 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:23:30.593 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:30.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:30.593 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:23:30.593 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:30.593 [... connect()/qpair failures for tqpair=0x1eff6d0 continue, interleaved with the trace above, from 01:00:17.538226 through 01:00:17.539628 ...]
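[editor's note] The trace above shows the tc2 restart path: disconnect_init kills and relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace, then waitforlisten 4088348 polls until the new process (nvmfpid=4088348) answers on rpc_addr=/var/tmp/spdk.sock, giving up after max_retries=100. A hedged C sketch of that wait loop, assuming a plain AF_UNIX connect probe stands in for the script's RPC check (wait_for_listen and the 100 ms delay are illustrative, not the autotest helper itself):

    /* Sketch of a waitforlisten-style poll: try to connect to the
     * SPDK RPC unix socket until the restarted target accepts, or
     * give up after max_retries attempts (100 in the trace above). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static int wait_for_listen(const char *path, int max_retries)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;           /* target is up and accepting RPCs */
            }
            close(fd);
            usleep(100 * 1000);     /* retry after 100 ms */
        }
        return -1;                  /* process never started listening */
    }

    int main(void)
    {
        return wait_for_listen("/var/tmp/spdk.sock", 100) ? 1 : 0;
    }

While this wait runs, the host-side qpairs keep retrying the data port, which is why the errno = 111 records continue to interleave with the script trace below.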
00:23:30.593 [... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence keeps repeating from 01:00:17.539756 through 01:00:17.569578, mostly for tqpair=0x1eff6d0 with brief runs on tqpair=0x7f7e9c000b90 and tqpair=0x7f7eac000b90, all with addr=10.0.0.2, port=4420 ...]
00:23:30.596 [2024-05-15 01:00:17.569713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.569834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.569859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 00:23:30.596 [2024-05-15 01:00:17.570021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.570181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.570205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 00:23:30.596 [2024-05-15 01:00:17.570373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.570504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.570528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 00:23:30.596 [2024-05-15 01:00:17.570658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.570803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.570829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 00:23:30.596 [2024-05-15 01:00:17.570958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.571088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.571113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 00:23:30.596 [2024-05-15 01:00:17.571280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.571439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.571464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 00:23:30.596 [2024-05-15 01:00:17.571595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.571726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.571751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 
00:23:30.596 [2024-05-15 01:00:17.571888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.572054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.572079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 00:23:30.596 [2024-05-15 01:00:17.572213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.572339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.572364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 00:23:30.596 [2024-05-15 01:00:17.572501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.572636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.572661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 00:23:30.596 [2024-05-15 01:00:17.572794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.572927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.572957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 00:23:30.596 [2024-05-15 01:00:17.573086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.573242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.573267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 00:23:30.596 [2024-05-15 01:00:17.573434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.573571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.573597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 00:23:30.596 [2024-05-15 01:00:17.573750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.573878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.573902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 
00:23:30.596 [2024-05-15 01:00:17.574041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.574202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.574226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 00:23:30.596 [2024-05-15 01:00:17.574352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.574487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.574514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 00:23:30.596 [2024-05-15 01:00:17.574644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.574799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.574824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 00:23:30.596 [2024-05-15 01:00:17.574951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.575089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.596 [2024-05-15 01:00:17.575114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.596 qpair failed and we were unable to recover it. 00:23:30.596 [2024-05-15 01:00:17.575249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.575375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.575399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.575532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.575658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.575682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.575815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.575955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.575981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 
00:23:30.597 [2024-05-15 01:00:17.576116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.576244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.576268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.576407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.576528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.576553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.576686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.576814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.576838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.576978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.577105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.577130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.577262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.577418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.577442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.577574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.577698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.577722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.577887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.578050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.578078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 
00:23:30.597 [2024-05-15 01:00:17.578206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.578338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.578362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.578498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.578635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.578661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.578790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.578922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.578956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.579089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.579212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.579236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.579393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.579529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.579554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.579688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.579823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.579848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.579990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.580125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.580150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 
00:23:30.597 [2024-05-15 01:00:17.580287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.580448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.580474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.580608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.580735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.580761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.580921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.581069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.581097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.581235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.581366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.581390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.581548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.581703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.581727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.581873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.582005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.582031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.582192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.582353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.582377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 
00:23:30.597 [2024-05-15 01:00:17.582511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.582647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.582673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.582800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.582927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.582958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.583093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.583228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.583252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.583397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.583524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.583549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.583709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.583834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.583859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.583997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.584128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.584153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 00:23:30.597 [2024-05-15 01:00:17.584290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.584414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.584437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.597 qpair failed and we were unable to recover it. 
00:23:30.597 [2024-05-15 01:00:17.584596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.597 [2024-05-15 01:00:17.584723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.584748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.584888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.585806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.585838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.586009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.586144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.586170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.586307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.586445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.586471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.586605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.586733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.586759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.586910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.587045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.587071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.587208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.587341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.587368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.587450] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
00:23:30.598 [2024-05-15 01:00:17.587505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.587544] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.598 [2024-05-15 01:00:17.587662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.587687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.587831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.587969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.587996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.588140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.588271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.588297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.588428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.588591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.588617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.588750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.588888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.588915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.589082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.589214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.589241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.589373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.589530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.589556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 
00:23:30.598 [2024-05-15 01:00:17.589688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.589810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.589835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.589974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.590110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.590137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.590277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.590417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.590443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.590579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.590709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.590735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.590905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.591045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.591072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.591219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.591347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.591372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.591505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.591654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.591679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 
00:23:30.598 [2024-05-15 01:00:17.591817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.591955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.591983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.592123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.592255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.592282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.592418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.592550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.592576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.592735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.592871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.592897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.593040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.593173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.593201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.593339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.593470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.593495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.593630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.593763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.593789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 
00:23:30.598 [2024-05-15 01:00:17.593917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.594064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.594090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.594216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.594344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.594370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.594506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.594635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.598 [2024-05-15 01:00:17.594661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.598 qpair failed and we were unable to recover it. 00:23:30.598 [2024-05-15 01:00:17.594821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.594963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.594991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.599 qpair failed and we were unable to recover it. 00:23:30.599 [2024-05-15 01:00:17.595128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.595292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.595318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.599 qpair failed and we were unable to recover it. 00:23:30.599 [2024-05-15 01:00:17.595452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.595581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.595606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.599 qpair failed and we were unable to recover it. 00:23:30.599 [2024-05-15 01:00:17.595737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.595865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.595891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.599 qpair failed and we were unable to recover it. 
00:23:30.599 [2024-05-15 01:00:17.596038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.596237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.596263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.599 qpair failed and we were unable to recover it. 00:23:30.599 [2024-05-15 01:00:17.596394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.596530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.596556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.599 qpair failed and we were unable to recover it. 00:23:30.599 [2024-05-15 01:00:17.596757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.596886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.596912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.599 qpair failed and we were unable to recover it. 00:23:30.599 [2024-05-15 01:00:17.597058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.597190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.597216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.599 qpair failed and we were unable to recover it. 00:23:30.599 [2024-05-15 01:00:17.597354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.597485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.597511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.599 qpair failed and we were unable to recover it. 00:23:30.599 [2024-05-15 01:00:17.597651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.597788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.597814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.599 qpair failed and we were unable to recover it. 00:23:30.599 [2024-05-15 01:00:17.597954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.598087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.598113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.599 qpair failed and we were unable to recover it. 
00:23:30.599 [2024-05-15 01:00:17.598257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.598381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.598407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.599 qpair failed and we were unable to recover it. 00:23:30.599 [2024-05-15 01:00:17.598544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.598704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.598730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.599 qpair failed and we were unable to recover it. 00:23:30.599 [2024-05-15 01:00:17.598874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.599025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.599052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.599 qpair failed and we were unable to recover it. 00:23:30.599 [2024-05-15 01:00:17.599189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.599318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.599343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.599 qpair failed and we were unable to recover it. 00:23:30.599 [2024-05-15 01:00:17.599478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.599618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.599644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.599 qpair failed and we were unable to recover it. 00:23:30.599 [2024-05-15 01:00:17.599772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.599915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.599 [2024-05-15 01:00:17.599946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.599 qpair failed and we were unable to recover it. 00:23:30.599 [2024-05-15 01:00:17.600087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.600 [2024-05-15 01:00:17.600240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.600 [2024-05-15 01:00:17.600266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.600 qpair failed and we were unable to recover it. 
00:23:30.600 [2024-05-15 01:00:17.600404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.600 [2024-05-15 01:00:17.600536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.600 [2024-05-15 01:00:17.600562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420
00:23:30.600 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 repeats from 01:00:17.600719 through 01:00:17.621451; duplicate records omitted ...]
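For context, errno = 111 on Linux is ECONNREFUSED: the TCP connection attempt was actively rejected because nothing was listening on the target port, which is why the failure recurs on every reconnect attempt above. A minimal sketch, outside of SPDK, that reproduces the same failure mode, assuming an address/port with no listener (10.0.0.2:4420 here only mirrors the log and is not special):

    /* Illustrative only, not SPDK code: connect() to a TCP port with no
     * listener and print the resulting errno, which on Linux is 111
     * (ECONNREFUSED) -- matching "connect() failed, errno = 111" above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP port, as in the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the target, this reports errno 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }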
00:23:30.878 EAL: No free 2048 kB hugepages reported on node 1
[... the connect() failed (errno = 111) / qpair recovery failure sequence for tqpair=0x7f7eac000b90 continues from 01:00:17.621618 through 01:00:17.623600 around the EAL message; duplicate records omitted ...]
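The single EAL line above is distinct from the socket errors: it comes from DPDK's environment abstraction layer, typically printed during initialization when it scans the per-node hugepage pools and finds no free 2048 kB pages on NUMA node 1. A minimal sketch of how to read the counter that message refers to, using the standard Linux per-node sysfs interface (illustrative, not DPDK/SPDK code):

    /* Illustrative only: print the free 2048 kB hugepage count for NUMA
     * node 1 via the standard per-node sysfs hugepage interface. */
    #include <stdio.h>

    int main(void)
    {
        const char *path =
            "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
        FILE *f = fopen(path, "r");
        if (!f) { perror(path); return 1; }   /* absent if node 1 does not exist */

        long free_pages = 0;
        if (fscanf(f, "%ld", &free_pages) == 1)
            printf("node 1 free 2048 kB hugepages: %ld\n", free_pages);
        fclose(f);
        return 0;
    }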
[... the same connect() failed (errno = 111) / sock connection error / qpair recovery failure sequence for tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 repeats from 01:00:17.623751 through 01:00:17.646371; duplicate records omitted ...]
00:23:30.882 [2024-05-15 01:00:17.646541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.882 [2024-05-15 01:00:17.646694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.882 [2024-05-15 01:00:17.646723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ea4000b90 with addr=10.0.0.2, port=4420
00:23:30.882 qpair failed and we were unable to recover it.
00:23:30.882 [2024-05-15 01:00:17.646879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.882 [2024-05-15 01:00:17.647038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.882 [2024-05-15 01:00:17.647067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:30.882 qpair failed and we were unable to recover it.
[... further identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x7f7ea4000b90 and tqpair=0x7f7e9c000b90 (addr=10.0.0.2, port=4420) elided, each ending "qpair failed and we were unable to recover it.", through 2024-05-15 01:00:17.656813 ...]
00:23:30.883 [2024-05-15 01:00:17.656976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.883 [2024-05-15 01:00:17.657102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:23:30.883 [2024-05-15 01:00:17.657119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.883 [2024-05-15 01:00:17.657144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420
00:23:30.883 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x7f7e9c000b90 (addr=10.0.0.2, port=4420) continue unchanged through 2024-05-15 01:00:17.690635; every attempt ends "qpair failed and we were unable to recover it." ...]
00:23:30.888 [2024-05-15 01:00:17.690785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.690921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.690958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.888 qpair failed and we were unable to recover it. 00:23:30.888 [2024-05-15 01:00:17.691099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.691233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.691261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.888 qpair failed and we were unable to recover it. 00:23:30.888 [2024-05-15 01:00:17.691392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.691520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.691546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.888 qpair failed and we were unable to recover it. 00:23:30.888 [2024-05-15 01:00:17.691681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.691811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.691837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.888 qpair failed and we were unable to recover it. 00:23:30.888 [2024-05-15 01:00:17.691976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.692111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.692137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.888 qpair failed and we were unable to recover it. 00:23:30.888 [2024-05-15 01:00:17.692263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.692409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.692435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.888 qpair failed and we were unable to recover it. 00:23:30.888 [2024-05-15 01:00:17.692566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.692698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.692724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.888 qpair failed and we were unable to recover it. 
00:23:30.888 [2024-05-15 01:00:17.692859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.692996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.693022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.888 qpair failed and we were unable to recover it. 00:23:30.888 [2024-05-15 01:00:17.693187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.693337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.693362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.888 qpair failed and we were unable to recover it. 00:23:30.888 [2024-05-15 01:00:17.693516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.693645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.693672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.888 qpair failed and we were unable to recover it. 00:23:30.888 [2024-05-15 01:00:17.693805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.693942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.693976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.888 qpair failed and we were unable to recover it. 00:23:30.888 [2024-05-15 01:00:17.694119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.694263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.694289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.888 qpair failed and we were unable to recover it. 00:23:30.888 [2024-05-15 01:00:17.694425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.694577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.694603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.888 qpair failed and we were unable to recover it. 00:23:30.888 [2024-05-15 01:00:17.694740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.694874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.888 [2024-05-15 01:00:17.694902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.888 qpair failed and we were unable to recover it. 
00:23:30.888 [2024-05-15 01:00:17.695058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.695191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.695216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.695363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.695506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.695532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.695670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.695801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.695827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.695990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.696127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.696156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.696289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.696423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.696449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.696582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.696711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.696737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.696868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.696991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.697018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 
00:23:30.889 [2024-05-15 01:00:17.697170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.697301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.697328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.697478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.697634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.697660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.697787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.697942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.697974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.698106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.698239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.698265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.698398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.698546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.698572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.698704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.698853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.698880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.699039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.699187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.699214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 
00:23:30.889 [2024-05-15 01:00:17.699393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.699531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.699559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.699703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.699834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.699861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.700000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.700147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.700172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.700312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.700458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.700484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.700621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.700749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.700774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.700938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.701088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.701114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.701265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.701397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.701422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 
00:23:30.889 [2024-05-15 01:00:17.701562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.701696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.701723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.701856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.701994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.702022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.702150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.702280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.702305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.702460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.702609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.702635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.702767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.702908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.889 [2024-05-15 01:00:17.702942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.889 qpair failed and we were unable to recover it. 00:23:30.889 [2024-05-15 01:00:17.703083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.703217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.703244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.703383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.703513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.703539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 
00:23:30.890 [2024-05-15 01:00:17.703681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.703811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.703838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.703979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.704107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.704132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.704280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.704407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.704433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.704566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.704710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.704735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.704881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.705025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.705051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.705186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.705327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.705353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.705490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.705634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.705662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 
00:23:30.890 [2024-05-15 01:00:17.705809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.705945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.705977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.706119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.706274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.706300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.706454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.706585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.706611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.706752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.706893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.706920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.707072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.707208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.707233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.707371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.707492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.707517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.707651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.707780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.707806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 
00:23:30.890 [2024-05-15 01:00:17.707949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.708081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.708107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.708237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.708385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.708410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.708542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.708666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.708691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.708825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.708968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.708997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.709131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.709277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.709302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.709429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.709593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.709618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.709768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.709899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.709923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 
00:23:30.890 [2024-05-15 01:00:17.710061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.710201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.710226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.710368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.710512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.710538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.710663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.710793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.710820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.710956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.711086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.711113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.711243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.711384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.711408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.711560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.711688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.711713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.711851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.711984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.712012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 
00:23:30.890 [2024-05-15 01:00:17.712165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.712297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.712323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.890 qpair failed and we were unable to recover it. 00:23:30.890 [2024-05-15 01:00:17.712482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.712612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.890 [2024-05-15 01:00:17.712641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.712782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.712913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.712942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.713072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.713201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.713227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.713371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.713502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.713526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.713656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.713817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.713842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.713979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.714108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.714134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 
00:23:30.891 [2024-05-15 01:00:17.714267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.714400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.714425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.714565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.714707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.714731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.714860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.714998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.715023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.715170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.715318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.715343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.715486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.715642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.715671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.715823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.715952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.715978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.716128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.716256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.716280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 
00:23:30.891 [2024-05-15 01:00:17.716411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.716533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.716558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.716699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.716846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.716870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.717044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.717177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.717203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.717347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.717480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.717507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.717648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.717783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.717810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.717951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.718086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.718112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.718250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.718388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.718412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 
00:23:30.891 [2024-05-15 01:00:17.718545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.718684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.718708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.718911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.719066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.719091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.719220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.719354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.719379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.719517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.719658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.719683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.719814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.719951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.719978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.720109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.720243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.720268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 00:23:30.891 [2024-05-15 01:00:17.720420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.720550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.891 [2024-05-15 01:00:17.720575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.891 qpair failed and we were unable to recover it. 
00:23:30.891 [2024-05-15 01:00:17.720736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.891 [2024-05-15 01:00:17.720866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.891 [2024-05-15 01:00:17.720891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.891 qpair failed and we were unable to recover it.
00:23:30.891 [2024-05-15 01:00:17.721029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.891 [2024-05-15 01:00:17.721168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.891 [2024-05-15 01:00:17.721194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.891 qpair failed and we were unable to recover it.
[... the same failure sequence (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x1eff6d0 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously, with only the microsecond timestamps advancing, through 01:00:17.766 ...]
00:23:30.896 [2024-05-15 01:00:17.767095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.896 [2024-05-15 01:00:17.767229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.896 [2024-05-15 01:00:17.767254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.896 qpair failed and we were unable to recover it. 00:23:30.896 [2024-05-15 01:00:17.767387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.896 [2024-05-15 01:00:17.767545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.896 [2024-05-15 01:00:17.767569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.896 qpair failed and we were unable to recover it. 00:23:30.896 [2024-05-15 01:00:17.767703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.896 [2024-05-15 01:00:17.767833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.896 [2024-05-15 01:00:17.767857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.896 qpair failed and we were unable to recover it. 00:23:30.896 [2024-05-15 01:00:17.767998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.896 [2024-05-15 01:00:17.768124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.896 [2024-05-15 01:00:17.768149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.896 qpair failed and we were unable to recover it. 00:23:30.896 [2024-05-15 01:00:17.768301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.896 [2024-05-15 01:00:17.768431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.896 [2024-05-15 01:00:17.768456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.896 qpair failed and we were unable to recover it. 00:23:30.896 [2024-05-15 01:00:17.768614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.896 [2024-05-15 01:00:17.768745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.896 [2024-05-15 01:00:17.768772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.896 qpair failed and we were unable to recover it. 00:23:30.896 [2024-05-15 01:00:17.768905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.896 [2024-05-15 01:00:17.769043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.896 [2024-05-15 01:00:17.769070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.896 qpair failed and we were unable to recover it. 
00:23:30.896 [2024-05-15 01:00:17.769219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.896 [2024-05-15 01:00:17.769348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.896 [2024-05-15 01:00:17.769374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.896 qpair failed and we were unable to recover it. 00:23:30.896 [2024-05-15 01:00:17.769535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.896 [2024-05-15 01:00:17.769664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.769688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.769822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.769975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.770000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.770167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.770295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.770319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.770453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.770599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.770624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.770751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.770883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.770910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.771067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.771241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.771269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 
00:23:30.897 [2024-05-15 01:00:17.771412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.771573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.771599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.771753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.771886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.771911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.772049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.772206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.772234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.772364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.772518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.772544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.772679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.772820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.772846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7eac000b90 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.772983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.773114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.773139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.773300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.773430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.773454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 
00:23:30.897 [2024-05-15 01:00:17.773592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.773723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.773749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.773885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.774021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.774047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.774180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.774313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.774338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.774461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.774589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.774614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.774747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.774880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.774906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.775060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.775210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.775235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.775370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.775507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.775534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 
00:23:30.897 [2024-05-15 01:00:17.775698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.775833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.775859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.775998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.776147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.776173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.776302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.776453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.776478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.776601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.776733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.776758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.776893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.777028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.777055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.777182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.777336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.777362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.777489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.777650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.777676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 
00:23:30.897 [2024-05-15 01:00:17.777811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.777950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.777977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.778113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.778243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.778267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.897 qpair failed and we were unable to recover it. 00:23:30.897 [2024-05-15 01:00:17.778406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.897 [2024-05-15 01:00:17.778529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.778553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.778682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.778815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.778839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.778974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.779107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.779134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.779271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.779405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.779432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.779596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.779759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.779783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 
00:23:30.898 [2024-05-15 01:00:17.779911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.780073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.780099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.780226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.780360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.780392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.780535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.780657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.780682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.780817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.780950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.780976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.781108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.781250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.781278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.781422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.781552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.781577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.781710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.781841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.781866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.782004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.782013] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:30.898 [2024-05-15 01:00:17.782047] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.898 [2024-05-15 01:00:17.782064] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.898 [2024-05-15 01:00:17.782089] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.898 [2024-05-15 01:00:17.782108] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.898 [2024-05-15 01:00:17.782168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.782192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.782323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.782429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:23:30.898 [2024-05-15 01:00:17.782457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.782484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.782523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:23:30.898 [2024-05-15 01:00:17.782613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.782731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:23:30.898 [2024-05-15 01:00:17.782740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:30.898 [2024-05-15 01:00:17.782748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.782775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.782921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.783056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.783081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.783241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.783375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.783401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it.
00:23:30.898 [2024-05-15 01:00:17.783547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.783686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.783711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.783841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.784037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.784063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.784212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.784341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.784367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.784505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.784698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.784723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.784873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.785001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.785027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.785158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.785300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.785326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.785468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.785602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.785628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 
00:23:30.898 [2024-05-15 01:00:17.785758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.785890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.785921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.786060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.786210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.786235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.786376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.786535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.786560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.786696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.786840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.786865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.787008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.787140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.787166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.898 qpair failed and we were unable to recover it. 00:23:30.898 [2024-05-15 01:00:17.787327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.898 [2024-05-15 01:00:17.787452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.787477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.787618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.787808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.787833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 
00:23:30.899 [2024-05-15 01:00:17.787982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.788121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.788148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.788298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.788437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.788463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.788598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.788738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.788768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.788904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.789053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.789079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.789224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.789364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.789391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.789524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.789660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.789685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.789817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.789964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.789990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 
00:23:30.899 [2024-05-15 01:00:17.790124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.790265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.790291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.790419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.790553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.790579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.790709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.790841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.790866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.791014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.791154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.791181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.791330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.791487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.791513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.791650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.791786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.791811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.791956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.792089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.792114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 
00:23:30.899 [2024-05-15 01:00:17.792256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.792418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.792446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.792584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.792717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.792742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.792872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.793002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.793028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.793176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.793322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.793347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.793492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.793623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.793648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.793783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.793945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.793972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.794124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.794274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.794300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 
00:23:30.899 [2024-05-15 01:00:17.794440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.794582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.794607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.794740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.794885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.899 [2024-05-15 01:00:17.794912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.899 qpair failed and we were unable to recover it. 00:23:30.899 [2024-05-15 01:00:17.795065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.900 [2024-05-15 01:00:17.795203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.900 [2024-05-15 01:00:17.795228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.900 qpair failed and we were unable to recover it. 00:23:30.900 [2024-05-15 01:00:17.795379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.900 [2024-05-15 01:00:17.795524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.900 [2024-05-15 01:00:17.795549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.900 qpair failed and we were unable to recover it. 00:23:30.900 [2024-05-15 01:00:17.795700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.900 [2024-05-15 01:00:17.795830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.900 [2024-05-15 01:00:17.795855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.900 qpair failed and we were unable to recover it. 00:23:30.900 [2024-05-15 01:00:17.796048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.900 [2024-05-15 01:00:17.796191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.900 [2024-05-15 01:00:17.796216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.900 qpair failed and we were unable to recover it. 00:23:30.900 [2024-05-15 01:00:17.796352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.900 [2024-05-15 01:00:17.796493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.900 [2024-05-15 01:00:17.796520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.900 qpair failed and we were unable to recover it. 
00:23:30.900 [2024-05-15 01:00:17.796712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.900 [2024-05-15 01:00:17.796847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.900 [2024-05-15 01:00:17.796872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.900 qpair failed and we were unable to recover it.
00:23:30.900 [... the same four-line pattern (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error against addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for roughly a hundred further attempts on tqpair=0x1eff6d0 between 01:00:17.796712 and 01:00:17.829168 ...]
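errno = 111 on Linux is ECONNREFUSED: the target host answers, but nothing is accepting TCP connections on the NVMe/TCP port at that moment, so every SYN gets an RST and the qpair can never be established. A minimal standalone sketch (plain POSIX sockets under that assumption, not SPDK's actual posix_sock_create path) that reproduces the errno seen above:

/* Minimal sketch, not SPDK code: a plain connect() to a reachable host
 * with no listener on the port fails with errno = 111 (ECONNREFUSED)
 * on Linux, the same errno logged by posix_sock_create above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Host up, port closed: Linux reports 111 (Connection refused);
         * an unreachable host would instead time out or return EHOSTUNREACH. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Compiled with cc and pointed at a reachable host with no listener on port 4420, this prints "connect() failed, errno = 111 (Connection refused)", matching the log lines.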
00:23:30.906 [... three further failed attempts on tqpair=0x1eff6d0 (01:00:17.829305 to 01:00:17.830046), after which the identical pattern continues against different tqpair objects: seven attempts with tqpair=0x7f7e9c000b90 (01:00:17.830403 to 01:00:17.832282) and five with tqpair=0x7f7eac000b90 (01:00:17.832638 to 01:00:17.834103), all to addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it."; from 01:00:17.834480 the attempts switch back to tqpair=0x1eff6d0 with six more failures through 01:00:17.836128 ...]
00:23:30.906 [... the connect() failed, errno = 111 loop continues unchanged on tqpair=0x1eff6d0 for some thirty more attempts, ending with the final recorded failure ...]
00:23:30.907 [2024-05-15 01:00:17.844679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.907 [2024-05-15 01:00:17.844809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.907 [2024-05-15 01:00:17.844834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420
00:23:30.907 qpair failed and we were unable to recover it.
00:23:30.907 [2024-05-15 01:00:17.844981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.845112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.845137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.845276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.845406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.845431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.845562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.845724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.845751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.845900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.846040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.846066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.846199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.846324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.846350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.846504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.846632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.846657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.846793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.846927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.846958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 
00:23:30.907 [2024-05-15 01:00:17.847136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.847271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.847296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.847441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.847579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.847605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.847745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.847894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.847919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.848068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.848215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.848240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.848378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.848535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.848562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.848695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.848854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.848879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.849021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.849150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.849175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 
00:23:30.907 [2024-05-15 01:00:17.849308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.849445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.849471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eff6d0 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 A controller has encountered a failure and is being reset. 00:23:30.907 [2024-05-15 01:00:17.849664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.849815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.849842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.850041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.850176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.850201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.850340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.850514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.850542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.850706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.850852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.850878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.851025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.851165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.851190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.851325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.851464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.851492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 
00:23:30.907 [2024-05-15 01:00:17.851630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.851798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.851826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.851966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.852117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.852144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.852281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.852419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.852444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.852581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.852784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.852812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.852962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.853099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.907 [2024-05-15 01:00:17.853126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.907 qpair failed and we were unable to recover it. 00:23:30.907 [2024-05-15 01:00:17.853269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.853406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.853432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.908 qpair failed and we were unable to recover it. 00:23:30.908 [2024-05-15 01:00:17.853595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.853799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.853824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.908 qpair failed and we were unable to recover it. 
00:23:30.908 [2024-05-15 01:00:17.853966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.854104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.854130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.908 qpair failed and we were unable to recover it. 00:23:30.908 [2024-05-15 01:00:17.854265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.854409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.854435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.908 qpair failed and we were unable to recover it. 00:23:30.908 [2024-05-15 01:00:17.854578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.854712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.854774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.908 qpair failed and we were unable to recover it. 00:23:30.908 [2024-05-15 01:00:17.854949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.855092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.855120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.908 qpair failed and we were unable to recover it. 00:23:30.908 [2024-05-15 01:00:17.855257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.855392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.855447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.908 qpair failed and we were unable to recover it. 00:23:30.908 [2024-05-15 01:00:17.855587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.855753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.855780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.908 qpair failed and we were unable to recover it. 00:23:30.908 [2024-05-15 01:00:17.855929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.856112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.856139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e9c000b90 with addr=10.0.0.2, port=4420 00:23:30.908 qpair failed and we were unable to recover it. 
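errno = 111 in the repeated messages above is ECONNREFUSED on Linux: while the target is down, nothing accepts on 10.0.0.2:4420, so every connect() attempt in the host's reconnect loop is refused until the listener comes back. A one-liner to confirm the mapping on the test host (assumes python3 is available):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused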
00:23:30.908 [2024-05-15 01:00:17.856331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.856481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.908 [2024-05-15 01:00:17.856508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef6190 with addr=10.0.0.2, port=4420 00:23:30.908 [2024-05-15 01:00:17.856526] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef6190 is same with the state(5) to be set 00:23:30.908 [2024-05-15 01:00:17.856554] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef6190 (9): Bad file descriptor 00:23:30.908 [2024-05-15 01:00:17.856573] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:30.908 [2024-05-15 01:00:17.856587] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:30.908 [2024-05-15 01:00:17.856605] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:30.908 Unable to reset the controller. 00:23:30.908 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:30.908 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:23:30.908 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.908 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:30.908 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.166 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.166 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:31.166 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.166 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.166 Malloc0 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.167 [2024-05-15 01:00:17.965982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:31.167 01:00:17 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.167 [2024-05-15 01:00:17.993984] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:31.167 [2024-05-15 01:00:17.994255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.167 01:00:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.167 01:00:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.167 01:00:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@58 -- # wait 4088032 00:23:32.100 Controller properly reset. 00:23:37.361 Initializing NVMe Controllers 00:23:37.361 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:37.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:37.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:23:37.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:23:37.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:23:37.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:23:37.361 Initialization complete. Launching workers. 
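The trace above rebuilds the target with a fixed RPC sequence (rpc_cmd in the harness is essentially a wrapper around scripts/rpc.py). For reference, the same bring-up can be replayed by hand; this is a minimal sketch that assumes a running nvmf_tgt on the default RPC socket and an SPDK checkout as the working directory:

    rpc=./scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB malloc bdev, 512 B blocks
    $rpc nvmf_create_transport -t tcp -o           # transport flags copied verbatim from the trace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420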
00:23:37.361 Starting thread on core 1 00:23:37.361 Starting thread on core 2 00:23:37.361 Starting thread on core 3 00:23:37.361 Starting thread on core 0 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@59 -- # sync 00:23:37.361 00:23:37.361 real 0m10.715s 00:23:37.361 user 0m32.241s 00:23:37.361 sys 0m7.980s 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:37.361 ************************************ 00:23:37.361 END TEST nvmf_target_disconnect_tc2 00:23:37.361 ************************************ 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@85 -- # nvmftestfini 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:37.361 rmmod nvme_tcp 00:23:37.361 rmmod nvme_fabrics 00:23:37.361 rmmod nvme_keyring 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 4088348 ']' 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 4088348 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 4088348 ']' 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 4088348 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4088348 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4088348' 00:23:37.361 killing process with pid 4088348 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 4088348 00:23:37.361 [2024-05-15 01:00:23.899150] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 
1 times 00:23:37.361 01:00:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 4088348 00:23:37.361 01:00:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:37.361 01:00:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:37.361 01:00:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:37.361 01:00:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:37.361 01:00:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:37.361 01:00:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.361 01:00:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.361 01:00:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.266 01:00:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.266 00:23:39.266 real 0m15.065s 00:23:39.266 user 0m57.190s 00:23:39.266 sys 0m10.155s 00:23:39.266 01:00:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:39.266 01:00:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:39.266 ************************************ 00:23:39.266 END TEST nvmf_target_disconnect 00:23:39.266 ************************************ 00:23:39.266 01:00:26 nvmf_tcp -- nvmf/nvmf.sh@124 -- # timing_exit host 00:23:39.266 01:00:26 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.266 01:00:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.266 01:00:26 nvmf_tcp -- nvmf/nvmf.sh@126 -- # trap - SIGINT SIGTERM EXIT 00:23:39.266 00:23:39.266 real 15m44.017s 00:23:39.266 user 38m22.433s 00:23:39.266 sys 4m15.845s 00:23:39.266 01:00:26 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:39.266 01:00:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.266 ************************************ 00:23:39.266 END TEST nvmf_tcp 00:23:39.266 ************************************ 00:23:39.266 01:00:26 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:23:39.266 01:00:26 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:23:39.266 01:00:26 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:39.266 01:00:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:39.266 01:00:26 -- common/autotest_common.sh@10 -- # set +x 00:23:39.266 ************************************ 00:23:39.266 START TEST spdkcli_nvmf_tcp 00:23:39.266 ************************************ 00:23:39.266 01:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:23:39.524 * Looking for test storage... 
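Both teardowns in this log (the target_disconnect one above and the spdkcli one below) go through the killprocess helper from test/common/autotest_common.sh. This is a condensed sketch of only the behavior visible in the xtrace, not the full helper; the guard against killing sudo and the "is not found" path match what the trace shows:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                 # the '[' -z ... ']' guard in the trace
        if ! kill -0 "$pid" 2>/dev/null; then     # pid already gone?
            echo "Process with pid $pid is not found"
            return 0
        fi
        if [ "$(uname)" = Linux ]; then
            # refuse to kill sudo itself; reactor_0, reactor_4, etc. are fair game
            [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
    }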
00:23:39.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=4089287 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 4089287 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 4089287 ']' 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:39.524 01:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.524 [2024-05-15 01:00:26.411840] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:23:39.524 [2024-05-15 01:00:26.411921] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4089287 ] 00:23:39.524 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.524 [2024-05-15 01:00:26.470926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:39.783 [2024-05-15 01:00:26.588753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.783 [2024-05-15 01:00:26.588757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.783 01:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:39.783 01:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:23:39.783 01:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:23:39.783 01:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.783 01:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.783 01:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:23:39.783 01:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:23:39.783 01:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:23:39.783 01:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:39.783 01:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.783 01:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:23:39.783 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:23:39.783 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:23:39.783 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:23:39.783 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:23:39.783 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:23:39.783 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:23:39.783 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:23:39.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:23:39.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:23:39.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:23:39.783 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:39.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:23:39.783 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:23:39.783 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:39.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:23:39.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:23:39.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:23:39.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:23:39.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:39.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:23:39.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:23:39.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:23:39.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:23:39.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:39.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:23:39.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:23:39.783 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:23:39.783 ' 00:23:42.325 [2024-05-15 01:00:29.326379] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.697 [2024-05-15 01:00:30.574141] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:43.697 [2024-05-15 01:00:30.574621] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:23:46.221 [2024-05-15 01:00:32.889691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:23:48.120 [2024-05-15 01:00:34.871794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:23:49.494 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:23:49.494 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:23:49.494 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:23:49.494 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:23:49.494 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:23:49.494 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:23:49.494 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:23:49.494 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:23:49.494 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:23:49.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:23:49.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:23:49.494 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:49.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:23:49.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:23:49.494 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:49.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:23:49.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:23:49.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:23:49.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:23:49.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:49.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:23:49.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:23:49.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:23:49.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:23:49.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:49.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:23:49.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:23:49.494 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:23:49.494 01:00:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:23:49.494 01:00:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:49.494 01:00:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:49.494 01:00:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:23:49.494 01:00:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:49.494 01:00:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:49.494 01:00:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:23:49.494 01:00:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:23:50.059 01:00:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:23:50.059 01:00:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:23:50.059 01:00:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:23:50.059 01:00:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:50.059 01:00:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.059 01:00:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:23:50.059 01:00:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:50.059 01:00:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.059 01:00:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:23:50.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:23:50.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:23:50.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:23:50.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:23:50.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:23:50.059 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:23:50.059 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:23:50.059 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:23:50.059 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:23:50.059 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:23:50.059 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:23:50.059 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:23:50.059 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:23:50.059 ' 00:23:55.335 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:23:55.335 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:23:55.335 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:55.335 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:23:55.335 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:23:55.335 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:23:55.335 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:23:55.335 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:55.335 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:23:55.336 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:23:55.336 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:23:55.336 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:23:55.336 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:23:55.336 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:23:55.336 01:00:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:23:55.336 01:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.336 01:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:55.336 01:00:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 4089287 00:23:55.336 01:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 4089287 ']' 00:23:55.336 01:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 4089287 00:23:55.336 01:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:23:55.336 01:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:55.336 01:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4089287 00:23:55.336 01:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:55.336 01:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:55.336 01:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4089287' 00:23:55.336 killing process with pid 4089287 00:23:55.336 01:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 4089287 00:23:55.336 [2024-05-15 01:00:42.344958] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:55.336 01:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 4089287 00:23:55.597 01:00:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:23:55.597 01:00:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:23:55.597 01:00:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 4089287 ']' 00:23:55.597 01:00:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 4089287 00:23:55.597 01:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 4089287 ']' 00:23:55.597 01:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 4089287 00:23:55.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (4089287) - No such process 00:23:55.597 01:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 4089287 is not found' 00:23:55.597 Process with pid 4089287 is not found 00:23:55.597 01:00:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:23:55.597 01:00:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:23:55.597 01:00:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:23:55.597 00:23:55.597 real 0m16.276s 00:23:55.597 user 0m34.668s 00:23:55.597 sys 0m0.816s 00:23:55.597 01:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:55.597 01:00:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:23:55.597 ************************************ 00:23:55.597 END TEST spdkcli_nvmf_tcp 00:23:55.597 ************************************ 00:23:55.597 01:00:42 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:23:55.597 01:00:42 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:55.597 01:00:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:55.597 01:00:42 -- common/autotest_common.sh@10 -- # set +x 00:23:55.597 ************************************ 00:23:55.597 START TEST nvmf_identify_passthru 00:23:55.597 ************************************ 00:23:55.597 01:00:42 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:23:55.855 * Looking for test storage... 00:23:55.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:55.855 01:00:42 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.855 01:00:42 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.855 01:00:42 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.855 01:00:42 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.855 01:00:42 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.855 01:00:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.855 01:00:42 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.855 01:00:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:23:55.855 01:00:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:55.855 01:00:42 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.855 01:00:42 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.855 01:00:42 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.855 01:00:42 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.855 01:00:42 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.855 01:00:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.855 01:00:42 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.855 01:00:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:23:55.855 01:00:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.855 01:00:42 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.855 01:00:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:55.855 01:00:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:55.855 01:00:42 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:23:55.855 01:00:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.232 01:00:44 
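Each time paths/export.sh is sourced it prepends the same go/protoc/golangci directories again, which is why the PATH strings in this log keep growing across tests. Purely as an illustration (the autotest scripts do not do this), duplicates could be collapsed while keeping first-seen order:

    # Hedged one-liner, not part of the test flow: de-duplicate PATH entries.
    PATH=$(printf '%s\n' "$PATH" | tr ':' '\n' | awk '!seen[$0]++' | paste -sd: -)
    export PATH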
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:23:57.232 Found 0000:08:00.0 (0x8086 - 0x159b) 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:23:57.232 Found 0000:08:00.1 (0x8086 - 0x159b) 00:23:57.232 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:23:57.233 Found net devices under 0000:08:00.0: cvl_0_0 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:23:57.233 Found net devices under 0000:08:00.1: cvl_0_1 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
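Both E810 ports are now classified (is_hw=yes over TCP), and the nvmf_tcp_init records that follow split them into a target/initiator pair. Condensed into a sketch, with interface and namespace names exactly as they appear in this log:

    # Target side: cvl_0_0 moves into its own network namespace with 10.0.0.2/24.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Initiator side: cvl_0_1 stays in the root namespace with 10.0.0.1/24.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    # Let NVMe/TCP traffic (port 4420) in from the initiator interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings that follow (root namespace to 10.0.0.2, namespace back to 10.0.0.1) verify the pair before any NVMe-oF traffic is attempted.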
00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:57.233 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.493 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.493 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.493 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:57.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:23:57.493 00:23:57.493 --- 10.0.0.2 ping statistics --- 00:23:57.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.493 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:23:57.493 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:57.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:23:57.493 00:23:57.493 --- 10.0.0.1 ping statistics --- 00:23:57.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.493 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:23:57.493 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.493 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:23:57.493 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:57.493 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.493 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:57.493 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:57.493 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.493 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:57.493 01:00:44 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:57.493 01:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:23:57.493 01:00:44 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:57.493 01:00:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:57.493 01:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:23:57.493 01:00:44 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:23:57.493 01:00:44 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:23:57.493 01:00:44 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:23:57.493 01:00:44 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:23:57.493 01:00:44 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:23:57.493 01:00:44 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:23:57.493 01:00:44 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:57.493 01:00:44 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:57.493 01:00:44 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:23:57.493 01:00:44 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:23:57.493 01:00:44 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0 00:23:57.493 01:00:44 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:84:00.0 00:23:57.493 01:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:84:00.0 00:23:57.493 01:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:84:00.0 ']' 00:23:57.493 01:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:23:57.493 01:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:23:57.493 01:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:23:57.493 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.679 
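The bdf comes from gen_nvme.sh piped through jq, and the serial/model are just field 3 of the matching identify output lines, as the next records show. A hedged condensation of these steps (head -n1 stands in for the helper's pick-the-first-bdf logic):

    # First local NVMe controller, then its serial and model via spdk_nvme_identify.
    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
    nvme_serial_number=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
    nvme_model_number=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
    # Later the test repeats the same extraction over the fabric and compares:
    #   spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'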
01:00:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ8275016S1P0FGN 00:24:01.679 01:00:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:24:01.679 01:00:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:24:01.679 01:00:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:24:01.679 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.870 01:00:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:24:05.870 01:00:52 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:24:05.870 01:00:52 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:05.870 01:00:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:05.870 01:00:52 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:24:05.870 01:00:52 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:05.870 01:00:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:05.870 01:00:52 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=4092837 00:24:05.870 01:00:52 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:05.870 01:00:52 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:05.870 01:00:52 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 4092837 00:24:05.870 01:00:52 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 4092837 ']' 00:24:05.870 01:00:52 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.870 01:00:52 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:05.870 01:00:52 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.870 01:00:52 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:05.870 01:00:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:05.870 [2024-05-15 01:00:52.822811] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:24:05.870 [2024-05-15 01:00:52.822915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.870 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.870 [2024-05-15 01:00:52.894801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:06.150 [2024-05-15 01:00:53.015681] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.150 [2024-05-15 01:00:53.015741] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
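The records around this point are the passthru-specific bring-up: nvmf_tgt starts inside the namespace with --wait-for-rpc, identify passthrough is enabled via nvmf_set_config before the framework initializes, and only then is the TCP transport created. Sketched under those assumptions (the readiness loop stands in for waitforlisten; rpc.py's -t timeout flag is assumed):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # Wait until the RPC socket answers before configuring anything.
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # must precede framework_start_init
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192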
00:24:06.150 [2024-05-15 01:00:53.015756] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.150 [2024-05-15 01:00:53.015769] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.150 [2024-05-15 01:00:53.015780] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:06.150 [2024-05-15 01:00:53.015845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.150 [2024-05-15 01:00:53.015869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.150 [2024-05-15 01:00:53.015916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:06.150 [2024-05-15 01:00:53.015920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.150 01:00:53 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:06.150 01:00:53 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:24:06.150 01:00:53 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:24:06.150 01:00:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.150 01:00:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:06.150 INFO: Log level set to 20 00:24:06.150 INFO: Requests: 00:24:06.150 { 00:24:06.150 "jsonrpc": "2.0", 00:24:06.150 "method": "nvmf_set_config", 00:24:06.150 "id": 1, 00:24:06.150 "params": { 00:24:06.150 "admin_cmd_passthru": { 00:24:06.150 "identify_ctrlr": true 00:24:06.150 } 00:24:06.150 } 00:24:06.150 } 00:24:06.150 00:24:06.150 INFO: response: 00:24:06.150 { 00:24:06.150 "jsonrpc": "2.0", 00:24:06.150 "id": 1, 00:24:06.150 "result": true 00:24:06.150 } 00:24:06.150 00:24:06.150 01:00:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.150 01:00:53 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:24:06.150 01:00:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.150 01:00:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:06.150 INFO: Setting log level to 20 00:24:06.150 INFO: Setting log level to 20 00:24:06.150 INFO: Log level set to 20 00:24:06.150 INFO: Log level set to 20 00:24:06.150 INFO: Requests: 00:24:06.150 { 00:24:06.150 "jsonrpc": "2.0", 00:24:06.150 "method": "framework_start_init", 00:24:06.150 "id": 1 00:24:06.150 } 00:24:06.150 00:24:06.150 INFO: Requests: 00:24:06.150 { 00:24:06.150 "jsonrpc": "2.0", 00:24:06.150 "method": "framework_start_init", 00:24:06.150 "id": 1 00:24:06.150 } 00:24:06.150 00:24:06.150 [2024-05-15 01:00:53.182135] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:24:06.150 INFO: response: 00:24:06.150 { 00:24:06.150 "jsonrpc": "2.0", 00:24:06.150 "id": 1, 00:24:06.150 "result": true 00:24:06.150 } 00:24:06.150 00:24:06.150 INFO: response: 00:24:06.150 { 00:24:06.150 "jsonrpc": "2.0", 00:24:06.150 "id": 1, 00:24:06.150 "result": true 00:24:06.150 } 00:24:06.150 00:24:06.150 01:00:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.150 01:00:53 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:06.150 01:00:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.150 01:00:53 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:24:06.150 INFO: Setting log level to 40 00:24:06.150 INFO: Setting log level to 40 00:24:06.150 INFO: Setting log level to 40 00:24:06.150 [2024-05-15 01:00:53.192098] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.414 01:00:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.414 01:00:53 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:24:06.414 01:00:53 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:06.414 01:00:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:06.414 01:00:53 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0 00:24:06.414 01:00:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.414 01:00:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:09.698 Nvme0n1 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.698 01:00:56 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.698 01:00:56 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.698 01:00:56 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:09.698 [2024-05-15 01:00:56.067081] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:09.698 [2024-05-15 01:00:56.067375] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.698 01:00:56 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:09.698 [ 00:24:09.698 { 00:24:09.698 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:09.698 "subtype": "Discovery", 00:24:09.698 "listen_addresses": [], 00:24:09.698 "allow_any_host": true, 00:24:09.698 "hosts": [] 00:24:09.698 }, 00:24:09.698 { 00:24:09.698 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.698 "subtype": "NVMe", 00:24:09.698 "listen_addresses": [ 00:24:09.698 { 00:24:09.698 "trtype": "TCP", 
00:24:09.698 "adrfam": "IPv4", 00:24:09.698 "traddr": "10.0.0.2", 00:24:09.698 "trsvcid": "4420" 00:24:09.698 } 00:24:09.698 ], 00:24:09.698 "allow_any_host": true, 00:24:09.698 "hosts": [], 00:24:09.698 "serial_number": "SPDK00000000000001", 00:24:09.698 "model_number": "SPDK bdev Controller", 00:24:09.698 "max_namespaces": 1, 00:24:09.698 "min_cntlid": 1, 00:24:09.698 "max_cntlid": 65519, 00:24:09.698 "namespaces": [ 00:24:09.698 { 00:24:09.698 "nsid": 1, 00:24:09.698 "bdev_name": "Nvme0n1", 00:24:09.698 "name": "Nvme0n1", 00:24:09.698 "nguid": "B2CF9BFACDB24265A2A40F1B91546DE7", 00:24:09.698 "uuid": "b2cf9bfa-cdb2-4265-a2a4-0f1b91546de7" 00:24:09.698 } 00:24:09.698 ] 00:24:09.698 } 00:24:09.698 ] 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.698 01:00:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:09.698 01:00:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:24:09.698 01:00:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:24:09.698 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.698 01:00:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ8275016S1P0FGN 00:24:09.698 01:00:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:09.698 01:00:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:24:09.698 01:00:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:24:09.698 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.698 01:00:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:24:09.698 01:00:56 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ8275016S1P0FGN '!=' PHLJ8275016S1P0FGN ']' 00:24:09.698 01:00:56 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:24:09.698 01:00:56 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.698 01:00:56 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:24:09.698 01:00:56 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:24:09.698 01:00:56 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:09.698 01:00:56 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:24:09.698 01:00:56 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:09.698 01:00:56 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:24:09.698 01:00:56 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:09.698 01:00:56 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:09.698 rmmod nvme_tcp 00:24:09.698 rmmod nvme_fabrics 00:24:09.698 rmmod 
nvme_keyring 00:24:09.698 01:00:56 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:09.698 01:00:56 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:24:09.698 01:00:56 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:24:09.698 01:00:56 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 4092837 ']' 00:24:09.698 01:00:56 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 4092837 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 4092837 ']' 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 4092837 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4092837 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4092837' 00:24:09.698 killing process with pid 4092837 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 4092837 00:24:09.698 [2024-05-15 01:00:56.408909] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:09.698 01:00:56 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 4092837 00:24:11.069 01:00:57 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:11.069 01:00:57 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:11.069 01:00:57 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:11.069 01:00:57 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:11.069 01:00:57 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:11.069 01:00:57 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.069 01:00:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:11.069 01:00:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.976 01:00:59 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:12.976 00:24:12.976 real 0m17.358s 00:24:12.976 user 0m25.958s 00:24:12.976 sys 0m1.951s 00:24:12.976 01:00:59 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:12.976 01:00:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:12.976 ************************************ 00:24:12.976 END TEST nvmf_identify_passthru 00:24:12.976 ************************************ 00:24:12.976 01:01:00 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:24:12.976 01:01:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:12.976 01:01:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:12.976 01:01:00 -- common/autotest_common.sh@10 -- # set +x 00:24:13.235 ************************************ 00:24:13.235 START TEST nvmf_dif 
00:24:13.235 ************************************ 00:24:13.235 01:01:00 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:24:13.235 * Looking for test storage... 00:24:13.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:13.235 01:01:00 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.235 01:01:00 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.235 01:01:00 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.235 01:01:00 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.235 01:01:00 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.235 01:01:00 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.235 01:01:00 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.236 01:01:00 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.236 01:01:00 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:24:13.236 01:01:00 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:13.236 01:01:00 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:24:13.236 01:01:00 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:24:13.236 01:01:00 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:24:13.236 01:01:00 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:24:13.236 01:01:00 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.236 01:01:00 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:13.236 01:01:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:13.236 01:01:00 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:24:13.236 01:01:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:15.141 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:15.141 Found 0000:08:00.1 (0x8086 - 0x159b) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:15.141 01:01:01 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:15.141 Found net devices under 0000:08:00.0: cvl_0_0 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:15.141 Found net devices under 0000:08:00.1: cvl_0_1 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:15.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:15.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:24:15.141 00:24:15.141 --- 10.0.0.2 ping statistics --- 00:24:15.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.141 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:15.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:15.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:24:15.141 00:24:15.141 --- 10.0.0.1 ping statistics --- 00:24:15.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.141 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:24:15.141 01:01:01 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:15.709 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:24:15.709 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:15.709 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:24:15.709 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:24:15.709 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:24:15.709 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:24:15.709 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:24:15.709 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:24:15.709 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:24:15.709 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:24:15.709 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:24:15.709 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:24:15.709 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:24:15.709 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:24:15.709 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:24:15.709 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:24:15.709 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:24:15.967 01:01:02 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.967 01:01:02 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:15.967 01:01:02 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:15.967 01:01:02 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.967 01:01:02 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:15.967 01:01:02 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:15.967 01:01:02 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:24:15.967 01:01:02 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:24:15.967 01:01:02 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:15.967 01:01:02 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:15.967 01:01:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:15.967 01:01:02 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=4095297 00:24:15.967 01:01:02 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:15.967 01:01:02 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 4095297 00:24:15.967 01:01:02 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 4095297 ']' 00:24:15.967 01:01:02 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.967 01:01:02 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:15.967 01:01:02 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.967 01:01:02 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:15.967 01:01:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:15.967 [2024-05-15 01:01:02.890090] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:24:15.967 [2024-05-15 01:01:02.890180] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.967 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.967 [2024-05-15 01:01:02.953646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.225 [2024-05-15 01:01:03.069582] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.225 [2024-05-15 01:01:03.069645] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.225 [2024-05-15 01:01:03.069661] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.225 [2024-05-15 01:01:03.069674] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.225 [2024-05-15 01:01:03.069685] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
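For reference, the nvmf_tcp_init sequence traced above boils down to a small amount of namespace plumbing (interface names, addresses, and port are taken verbatim from the trace; run as root). The target application is then launched through NVMF_TARGET_NS_CMD, i.e. prefixed with "ip netns exec cvl_0_0_ns_spdk", so it listens on 10.0.0.2 inside the namespace while fio connects from the default namespace via 10.0.0.1:

ip -4 addr flush cvl_0_0                       # start from clean interfaces
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                   # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move one ice port into it
ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port on the test link
ping -c 1 10.0.0.2                             # verify both directions before starting the target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Using two ports of the same physical NIC this way keeps traffic on real hardware while still isolating target and initiator network stacks on a single machine.
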
00:24:16.225 [2024-05-15 01:01:03.069714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.225 01:01:03 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:16.225 01:01:03 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:24:16.225 01:01:03 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:16.225 01:01:03 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:16.225 01:01:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:16.225 01:01:03 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.225 01:01:03 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:24:16.225 01:01:03 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:24:16.225 01:01:03 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.225 01:01:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:16.225 [2024-05-15 01:01:03.205869] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.225 01:01:03 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.225 01:01:03 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:24:16.225 01:01:03 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:16.225 01:01:03 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:16.225 01:01:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:16.225 ************************************ 00:24:16.225 START TEST fio_dif_1_default 00:24:16.225 ************************************ 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:16.225 bdev_null0 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:16.225 [2024-05-15 01:01:03.273979] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:16.225 [2024-05-15 01:01:03.274231] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:16.225 { 00:24:16.225 "params": { 00:24:16.225 "name": "Nvme$subsystem", 00:24:16.225 "trtype": "$TEST_TRANSPORT", 00:24:16.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:16.225 "adrfam": "ipv4", 00:24:16.225 "trsvcid": "$NVMF_PORT", 00:24:16.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:16.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:16.225 "hdgst": ${hdgst:-false}, 00:24:16.225 "ddgst": ${ddgst:-false} 00:24:16.225 }, 00:24:16.225 "method": "bdev_nvme_attach_controller" 00:24:16.225 } 00:24:16.225 EOF 00:24:16.225 )") 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in 
"${sanitizers[@]}" 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:16.225 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:24:16.482 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:16.482 01:01:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:24:16.482 01:01:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:24:16.482 01:01:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:16.482 "params": { 00:24:16.482 "name": "Nvme0", 00:24:16.482 "trtype": "tcp", 00:24:16.482 "traddr": "10.0.0.2", 00:24:16.482 "adrfam": "ipv4", 00:24:16.482 "trsvcid": "4420", 00:24:16.482 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.482 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:16.482 "hdgst": false, 00:24:16.482 "ddgst": false 00:24:16.482 }, 00:24:16.482 "method": "bdev_nvme_attach_controller" 00:24:16.482 }' 00:24:16.482 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:16.482 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:16.482 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:16.482 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:16.482 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:16.482 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:16.482 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:16.482 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:16.482 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:24:16.482 01:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:16.482 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:16.482 fio-3.35 00:24:16.482 Starting 1 thread 00:24:16.739 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.929 00:24:28.929 filename0: (groupid=0, jobs=1): err= 0: pid=4095548: Wed May 15 01:01:14 2024 00:24:28.929 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10020msec) 00:24:28.929 slat (nsec): min=7599, max=40597, avg=9279.42, stdev=3255.73 00:24:28.929 clat (usec): min=40901, max=42942, avg=41036.73, stdev=268.55 00:24:28.929 lat (usec): min=40910, max=42974, avg=41046.01, stdev=269.52 00:24:28.929 clat percentiles (usec): 00:24:28.929 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:24:28.929 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:24:28.929 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:24:28.929 | 99.00th=[42206], 99.50th=[42730], 
99.90th=[42730], 99.95th=[42730], 00:24:28.929 | 99.99th=[42730] 00:24:28.929 bw ( KiB/s): min= 383, max= 416, per=99.58%, avg=388.75, stdev=11.75, samples=20 00:24:28.929 iops : min= 95, max= 104, avg=97.15, stdev= 2.96, samples=20 00:24:28.929 lat (msec) : 50=100.00% 00:24:28.929 cpu : usr=90.20%, sys=9.48%, ctx=23, majf=0, minf=151 00:24:28.929 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:28.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.929 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.929 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:28.929 00:24:28.929 Run status group 0 (all jobs): 00:24:28.929 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10020-10020msec 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.929 00:24:28.929 real 0m11.203s 00:24:28.929 user 0m10.062s 00:24:28.929 sys 0m1.206s 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:28.929 ************************************ 00:24:28.929 END TEST fio_dif_1_default 00:24:28.929 ************************************ 00:24:28.929 01:01:14 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:24:28.929 01:01:14 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:28.929 01:01:14 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:28.929 01:01:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:28.929 ************************************ 00:24:28.929 START TEST fio_dif_1_multi_subsystems 00:24:28.929 ************************************ 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- 
# local sub 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:28.929 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:28.930 bdev_null0 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:28.930 [2024-05-15 01:01:14.535084] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:28.930 bdev_null1 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.930 01:01:14 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:28.930 { 00:24:28.930 "params": { 00:24:28.930 "name": "Nvme$subsystem", 00:24:28.930 "trtype": "$TEST_TRANSPORT", 00:24:28.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.930 "adrfam": "ipv4", 00:24:28.930 "trsvcid": "$NVMF_PORT", 00:24:28.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.930 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:24:28.930 "hdgst": ${hdgst:-false}, 00:24:28.930 "ddgst": ${ddgst:-false} 00:24:28.930 }, 00:24:28.930 "method": "bdev_nvme_attach_controller" 00:24:28.930 } 00:24:28.930 EOF 00:24:28.930 )") 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:28.930 { 00:24:28.930 "params": { 00:24:28.930 "name": "Nvme$subsystem", 00:24:28.930 "trtype": "$TEST_TRANSPORT", 00:24:28.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.930 "adrfam": "ipv4", 00:24:28.930 "trsvcid": "$NVMF_PORT", 00:24:28.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.930 "hdgst": ${hdgst:-false}, 00:24:28.930 "ddgst": ${ddgst:-false} 00:24:28.930 }, 00:24:28.930 "method": "bdev_nvme_attach_controller" 00:24:28.930 } 00:24:28.930 EOF 00:24:28.930 )") 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:28.930 "params": { 00:24:28.930 "name": "Nvme0", 00:24:28.930 "trtype": "tcp", 00:24:28.930 "traddr": "10.0.0.2", 00:24:28.930 "adrfam": "ipv4", 00:24:28.930 "trsvcid": "4420", 00:24:28.930 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:28.930 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:28.930 "hdgst": false, 00:24:28.930 "ddgst": false 00:24:28.930 }, 00:24:28.930 "method": "bdev_nvme_attach_controller" 00:24:28.930 },{ 00:24:28.930 "params": { 00:24:28.930 "name": "Nvme1", 00:24:28.930 "trtype": "tcp", 00:24:28.930 "traddr": "10.0.0.2", 00:24:28.930 "adrfam": "ipv4", 00:24:28.930 "trsvcid": "4420", 00:24:28.930 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.930 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:28.930 "hdgst": false, 00:24:28.930 "ddgst": false 00:24:28.930 }, 00:24:28.930 "method": "bdev_nvme_attach_controller" 00:24:28.930 }' 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:24:28.930 01:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:28.930 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:28.930 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:28.930 fio-3.35 00:24:28.930 Starting 2 threads 00:24:28.930 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.903 00:24:38.903 filename0: (groupid=0, jobs=1): err= 0: pid=4096625: Wed May 15 01:01:25 2024 00:24:38.903 read: IOPS=189, BW=756KiB/s (775kB/s)(7584KiB/10027msec) 00:24:38.903 slat (nsec): min=7734, max=55634, avg=9153.46, stdev=2555.34 00:24:38.903 clat (usec): min=821, max=42735, avg=21126.23, stdev=20088.99 00:24:38.903 lat (usec): min=829, max=42789, avg=21135.38, stdev=20088.87 00:24:38.903 clat percentiles (usec): 00:24:38.903 | 1.00th=[ 857], 5.00th=[ 898], 10.00th=[ 914], 20.00th=[ 922], 00:24:38.903 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[41157], 60.00th=[41157], 00:24:38.903 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:24:38.903 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:24:38.903 | 99.99th=[42730] 
00:24:38.903 bw ( KiB/s): min= 672, max= 768, per=50.01%, avg=756.80, stdev=28.00, samples=20 00:24:38.903 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:24:38.903 lat (usec) : 1000=44.57% 00:24:38.903 lat (msec) : 2=5.22%, 50=50.21% 00:24:38.903 cpu : usr=93.70%, sys=5.95%, ctx=13, majf=0, minf=232 00:24:38.903 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:38.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.903 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.903 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:38.903 filename1: (groupid=0, jobs=1): err= 0: pid=4096626: Wed May 15 01:01:25 2024 00:24:38.903 read: IOPS=188, BW=756KiB/s (774kB/s)(7584KiB/10034msec) 00:24:38.903 slat (nsec): min=7721, max=53696, avg=9184.11, stdev=2391.82 00:24:38.903 clat (usec): min=807, max=42758, avg=21140.80, stdev=20088.99 00:24:38.903 lat (usec): min=815, max=42812, avg=21149.99, stdev=20088.93 00:24:38.903 clat percentiles (usec): 00:24:38.903 | 1.00th=[ 873], 5.00th=[ 898], 10.00th=[ 906], 20.00th=[ 914], 00:24:38.903 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[41157], 60.00th=[41157], 00:24:38.903 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:24:38.903 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:24:38.903 | 99.99th=[42730] 00:24:38.903 bw ( KiB/s): min= 672, max= 768, per=50.01%, avg=756.80, stdev=26.01, samples=20 00:24:38.903 iops : min= 168, max= 192, avg=189.20, stdev= 6.50, samples=20 00:24:38.903 lat (usec) : 1000=41.93% 00:24:38.903 lat (msec) : 2=7.86%, 50=50.21% 00:24:38.903 cpu : usr=94.18%, sys=5.48%, ctx=14, majf=0, minf=50 00:24:38.903 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:38.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.903 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.903 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:38.903 00:24:38.903 Run status group 0 (all jobs): 00:24:38.903 READ: bw=1512KiB/s (1548kB/s), 756KiB/s-756KiB/s (774kB/s-775kB/s), io=14.8MiB (15.5MB), run=10027-10034msec 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.903 00:24:38.903 real 0m11.293s 00:24:38.903 user 0m19.972s 00:24:38.903 sys 0m1.409s 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:38.903 01:01:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:38.903 ************************************ 00:24:38.903 END TEST fio_dif_1_multi_subsystems 00:24:38.903 ************************************ 00:24:38.903 01:01:25 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:24:38.903 01:01:25 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:38.903 01:01:25 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:38.903 01:01:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:38.903 ************************************ 00:24:38.903 START TEST fio_dif_rand_params 00:24:38.903 ************************************ 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:38.903 
01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:38.903 bdev_null0 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:38.903 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:38.904 [2024-05-15 01:01:25.882327] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:38.904 { 00:24:38.904 "params": { 00:24:38.904 "name": "Nvme$subsystem", 00:24:38.904 "trtype": "$TEST_TRANSPORT", 
00:24:38.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:38.904 "adrfam": "ipv4", 00:24:38.904 "trsvcid": "$NVMF_PORT", 00:24:38.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:38.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:38.904 "hdgst": ${hdgst:-false}, 00:24:38.904 "ddgst": ${ddgst:-false} 00:24:38.904 }, 00:24:38.904 "method": "bdev_nvme_attach_controller" 00:24:38.904 } 00:24:38.904 EOF 00:24:38.904 )") 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:38.904 "params": { 00:24:38.904 "name": "Nvme0", 00:24:38.904 "trtype": "tcp", 00:24:38.904 "traddr": "10.0.0.2", 00:24:38.904 "adrfam": "ipv4", 00:24:38.904 "trsvcid": "4420", 00:24:38.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:38.904 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:38.904 "hdgst": false, 00:24:38.904 "ddgst": false 00:24:38.904 }, 00:24:38.904 "method": "bdev_nvme_attach_controller" 00:24:38.904 }' 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:24:38.904 01:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:39.163 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:39.163 ... 
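A plausible reconstruction of the fio invocation itself: the helper names (gen_nvmf_target_json, gen_fio_conf) come from the trace, while the process-substitution plumbing is inferred from the /dev/fd/62 and /dev/fd/61 arguments logged above rather than shown verbatim. Neither config ever touches disk; both arrive as file descriptors:

# $asan_libs and $plugin as computed in the sketch above
LD_PRELOAD="$asan_libs $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0) \
    <(gen_fio_conf)
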
00:24:39.163 fio-3.35 00:24:39.163 Starting 3 threads 00:24:39.163 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.717 00:24:45.717 filename0: (groupid=0, jobs=1): err= 0: pid=4097690: Wed May 15 01:01:31 2024 00:24:45.717 read: IOPS=180, BW=22.6MiB/s (23.7MB/s)(114MiB/5037msec) 00:24:45.717 slat (nsec): min=5772, max=40481, avg=13224.66, stdev=4233.71 00:24:45.717 clat (usec): min=5529, max=91622, avg=16601.53, stdev=14448.44 00:24:45.717 lat (usec): min=5540, max=91646, avg=16614.76, stdev=14448.94 00:24:45.717 clat percentiles (usec): 00:24:45.717 | 1.00th=[ 6194], 5.00th=[ 6652], 10.00th=[ 7308], 20.00th=[ 9372], 00:24:45.717 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[11076], 60.00th=[12649], 00:24:45.717 | 70.00th=[13698], 80.00th=[14877], 90.00th=[50070], 95.00th=[53216], 00:24:45.717 | 99.00th=[55313], 99.50th=[57410], 99.90th=[91751], 99.95th=[91751], 00:24:45.717 | 99.99th=[91751] 00:24:45.717 bw ( KiB/s): min=17664, max=30208, per=33.01%, avg=23198.90, stdev=3980.96, samples=10 00:24:45.717 iops : min= 138, max= 236, avg=181.20, stdev=31.06, samples=10 00:24:45.717 lat (msec) : 10=34.21%, 20=52.04%, 50=3.08%, 100=10.67% 00:24:45.717 cpu : usr=92.95%, sys=6.43%, ctx=67, majf=0, minf=115 00:24:45.717 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:45.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.717 issued rwts: total=909,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.717 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:45.717 filename0: (groupid=0, jobs=1): err= 0: pid=4097691: Wed May 15 01:01:31 2024 00:24:45.717 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(116MiB/5043msec) 00:24:45.717 slat (nsec): min=5757, max=71632, avg=12102.97, stdev=3121.56 00:24:45.717 clat (usec): min=5568, max=95140, avg=16187.28, stdev=14231.37 00:24:45.717 lat (usec): min=5580, max=95152, avg=16199.38, stdev=14231.38 00:24:45.717 clat percentiles (usec): 00:24:45.717 | 1.00th=[ 6128], 5.00th=[ 6587], 10.00th=[ 7504], 20.00th=[ 9372], 00:24:45.717 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11207], 60.00th=[12649], 00:24:45.717 | 70.00th=[13829], 80.00th=[15270], 90.00th=[50594], 95.00th=[53216], 00:24:45.717 | 99.00th=[56361], 99.50th=[59507], 99.90th=[94897], 99.95th=[94897], 00:24:45.717 | 99.99th=[94897] 00:24:45.717 bw ( KiB/s): min=19968, max=26880, per=33.84%, avg=23786.80, stdev=2356.96, samples=10 00:24:45.717 iops : min= 156, max= 210, avg=185.80, stdev=18.44, samples=10 00:24:45.717 lat (msec) : 10=33.08%, 20=55.21%, 50=0.86%, 100=10.85% 00:24:45.717 cpu : usr=93.14%, sys=6.47%, ctx=10, majf=0, minf=192 00:24:45.717 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:45.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.717 issued rwts: total=931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.717 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:45.717 filename0: (groupid=0, jobs=1): err= 0: pid=4097692: Wed May 15 01:01:31 2024 00:24:45.717 read: IOPS=185, BW=23.2MiB/s (24.3MB/s)(116MiB/5013msec) 00:24:45.717 slat (nsec): min=5805, max=48954, avg=13350.28, stdev=4868.70 00:24:45.717 clat (usec): min=5656, max=94605, avg=16166.51, stdev=14486.85 00:24:45.717 lat (usec): min=5680, max=94617, avg=16179.86, stdev=14486.95 00:24:45.717 clat percentiles (usec): 
00:24:45.717 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 7111], 20.00th=[ 8979], 00:24:45.717 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10945], 60.00th=[12649], 00:24:45.717 | 70.00th=[13829], 80.00th=[15139], 90.00th=[50070], 95.00th=[53740], 00:24:45.717 | 99.00th=[57410], 99.50th=[58459], 99.90th=[94897], 99.95th=[94897], 00:24:45.717 | 99.99th=[94897] 00:24:45.717 bw ( KiB/s): min=19968, max=28672, per=33.73%, avg=23705.60, stdev=3081.94, samples=10 00:24:45.717 iops : min= 156, max= 224, avg=185.20, stdev=24.08, samples=10 00:24:45.717 lat (msec) : 10=37.14%, 20=51.02%, 50=2.15%, 100=9.69% 00:24:45.717 cpu : usr=92.50%, sys=6.66%, ctx=99, majf=0, minf=53 00:24:45.717 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:45.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.717 issued rwts: total=929,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.717 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:45.717 00:24:45.717 Run status group 0 (all jobs): 00:24:45.717 READ: bw=68.6MiB/s (72.0MB/s), 22.6MiB/s-23.2MiB/s (23.7MB/s-24.3MB/s), io=346MiB (363MB), run=5013-5043msec 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
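The create_subsystem helper entered just above issues the same four RPCs seen throughout these tests via rpc_cmd, which wraps SPDK's scripts/rpc.py against the /var/tmp/spdk.sock address noted earlier. A standalone equivalent, with every argument taken from the trace (64 MB null bdev, 512-byte blocks plus 16 bytes of metadata, DIF type 2 for this round):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

A null bdev discards writes and returns zeroes on reads, which is why it pairs naturally with the --dif-insert-or-strip transport option being exercised here: the test measures DIF handling in the TCP transport, not backing storage.
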
00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:45.717 bdev_null0 00:24:45.717 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:45.718 [2024-05-15 01:01:31.995914] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.718 01:01:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:45.718 bdev_null1 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:45.718 bdev_null2 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 
-- # local subsystem config 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:45.718 { 00:24:45.718 "params": { 00:24:45.718 "name": "Nvme$subsystem", 00:24:45.718 "trtype": "$TEST_TRANSPORT", 00:24:45.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:45.718 "adrfam": "ipv4", 00:24:45.718 "trsvcid": "$NVMF_PORT", 00:24:45.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:45.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:45.718 "hdgst": ${hdgst:-false}, 00:24:45.718 "ddgst": ${ddgst:-false} 00:24:45.718 }, 00:24:45.718 "method": "bdev_nvme_attach_controller" 00:24:45.718 } 00:24:45.718 EOF 00:24:45.718 )") 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:45.718 { 00:24:45.718 "params": { 00:24:45.718 "name": "Nvme$subsystem", 00:24:45.718 "trtype": "$TEST_TRANSPORT", 00:24:45.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:45.718 "adrfam": "ipv4", 00:24:45.718 "trsvcid": "$NVMF_PORT", 00:24:45.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:45.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:45.718 "hdgst": ${hdgst:-false}, 00:24:45.718 "ddgst": ${ddgst:-false} 00:24:45.718 }, 00:24:45.718 "method": "bdev_nvme_attach_controller" 00:24:45.718 } 00:24:45.718 EOF 00:24:45.718 )") 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # 
(( file <= files )) 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:45.718 { 00:24:45.718 "params": { 00:24:45.718 "name": "Nvme$subsystem", 00:24:45.718 "trtype": "$TEST_TRANSPORT", 00:24:45.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:45.718 "adrfam": "ipv4", 00:24:45.718 "trsvcid": "$NVMF_PORT", 00:24:45.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:45.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:45.718 "hdgst": ${hdgst:-false}, 00:24:45.718 "ddgst": ${ddgst:-false} 00:24:45.718 }, 00:24:45.718 "method": "bdev_nvme_attach_controller" 00:24:45.718 } 00:24:45.718 EOF 00:24:45.718 )") 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:24:45.718 01:01:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:45.718 "params": { 00:24:45.718 "name": "Nvme0", 00:24:45.718 "trtype": "tcp", 00:24:45.718 "traddr": "10.0.0.2", 00:24:45.718 "adrfam": "ipv4", 00:24:45.718 "trsvcid": "4420", 00:24:45.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:45.718 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:45.718 "hdgst": false, 00:24:45.718 "ddgst": false 00:24:45.718 }, 00:24:45.719 "method": "bdev_nvme_attach_controller" 00:24:45.719 },{ 00:24:45.719 "params": { 00:24:45.719 "name": "Nvme1", 00:24:45.719 "trtype": "tcp", 00:24:45.719 "traddr": "10.0.0.2", 00:24:45.719 "adrfam": "ipv4", 00:24:45.719 "trsvcid": "4420", 00:24:45.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:45.719 "hdgst": false, 00:24:45.719 "ddgst": false 00:24:45.719 }, 00:24:45.719 "method": "bdev_nvme_attach_controller" 00:24:45.719 },{ 00:24:45.719 "params": { 00:24:45.719 "name": "Nvme2", 00:24:45.719 "trtype": "tcp", 00:24:45.719 "traddr": "10.0.0.2", 00:24:45.719 "adrfam": "ipv4", 00:24:45.719 "trsvcid": "4420", 00:24:45.719 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:45.719 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:45.719 "hdgst": false, 00:24:45.719 "ddgst": false 00:24:45.719 }, 00:24:45.719 "method": "bdev_nvme_attach_controller" 00:24:45.719 }' 00:24:45.719 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:45.719 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:45.719 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:45.719 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:45.719 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:45.719 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:45.719 01:01:32 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:24:45.719 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:45.719 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:24:45.719 01:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:45.719 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:45.719 ... 00:24:45.719 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:45.719 ... 00:24:45.719 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:45.719 ... 00:24:45.719 fio-3.35 00:24:45.719 Starting 24 threads 00:24:45.719 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.923 00:24:57.923 filename0: (groupid=0, jobs=1): err= 0: pid=4098341: Wed May 15 01:01:43 2024 00:24:57.923 read: IOPS=39, BW=158KiB/s (162kB/s)(1600KiB/10096msec) 00:24:57.923 slat (usec): min=8, max=153, avg=40.81, stdev=37.07 00:24:57.923 clat (msec): min=344, max=779, avg=403.56, stdev=80.52 00:24:57.923 lat (msec): min=344, max=779, avg=403.60, stdev=80.52 00:24:57.923 clat percentiles (msec): 00:24:57.923 | 1.00th=[ 347], 5.00th=[ 351], 10.00th=[ 363], 20.00th=[ 368], 00:24:57.923 | 30.00th=[ 376], 40.00th=[ 384], 50.00th=[ 388], 60.00th=[ 393], 00:24:57.923 | 70.00th=[ 397], 80.00th=[ 401], 90.00th=[ 422], 95.00th=[ 468], 00:24:57.923 | 99.00th=[ 776], 99.50th=[ 776], 99.90th=[ 776], 99.95th=[ 776], 00:24:57.923 | 99.99th=[ 776] 00:24:57.923 bw ( KiB/s): min= 128, max= 256, per=3.86%, avg=161.68, stdev=57.91, samples=19 00:24:57.923 iops : min= 32, max= 64, avg=40.42, stdev=14.48, samples=19 00:24:57.923 lat (msec) : 500=96.00%, 1000=4.00% 00:24:57.923 cpu : usr=98.73%, sys=0.88%, ctx=14, majf=0, minf=27 00:24:57.923 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:57.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.923 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.923 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.923 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.923 filename0: (groupid=0, jobs=1): err= 0: pid=4098342: Wed May 15 01:01:43 2024 00:24:57.923 read: IOPS=42, BW=171KiB/s (175kB/s)(1728KiB/10127msec) 00:24:57.923 slat (usec): min=10, max=171, avg=86.17, stdev=38.56 00:24:57.923 clat (msec): min=144, max=563, avg=374.38, stdev=65.33 00:24:57.923 lat (msec): min=144, max=563, avg=374.46, stdev=65.33 00:24:57.923 clat percentiles (msec): 00:24:57.923 | 1.00th=[ 146], 5.00th=[ 249], 10.00th=[ 288], 20.00th=[ 359], 00:24:57.923 | 30.00th=[ 372], 40.00th=[ 380], 50.00th=[ 388], 60.00th=[ 393], 00:24:57.923 | 70.00th=[ 397], 80.00th=[ 405], 90.00th=[ 422], 95.00th=[ 456], 00:24:57.923 | 99.00th=[ 531], 99.50th=[ 550], 99.90th=[ 567], 99.95th=[ 567], 00:24:57.923 | 99.99th=[ 567] 00:24:57.923 bw ( KiB/s): min= 112, max= 256, per=3.98%, avg=166.30, stdev=58.89, samples=20 00:24:57.923 iops : min= 28, max= 64, avg=41.50, stdev=14.78, samples=20 00:24:57.923 lat (msec) : 250=5.09%, 500=92.59%, 750=2.31% 00:24:57.923 cpu : usr=98.45%, sys=1.10%, ctx=27, majf=0, minf=20 00:24:57.923 IO depths : 1=3.5%, 
2=9.5%, 4=24.3%, 8=53.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:24:57.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.923 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.923 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.923 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.923 filename0: (groupid=0, jobs=1): err= 0: pid=4098343: Wed May 15 01:01:43 2024 00:24:57.923 read: IOPS=39, BW=160KiB/s (163kB/s)(1600KiB/10025msec) 00:24:57.923 slat (usec): min=13, max=175, avg=111.00, stdev=30.02 00:24:57.923 clat (msec): min=227, max=686, avg=400.06, stdev=64.68 00:24:57.923 lat (msec): min=227, max=687, avg=400.17, stdev=64.67 00:24:57.923 clat percentiles (msec): 00:24:57.923 | 1.00th=[ 338], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 372], 00:24:57.923 | 30.00th=[ 380], 40.00th=[ 384], 50.00th=[ 393], 60.00th=[ 397], 00:24:57.923 | 70.00th=[ 401], 80.00th=[ 405], 90.00th=[ 409], 95.00th=[ 456], 00:24:57.923 | 99.00th=[ 684], 99.50th=[ 684], 99.90th=[ 684], 99.95th=[ 684], 00:24:57.923 | 99.99th=[ 684] 00:24:57.923 bw ( KiB/s): min= 127, max= 256, per=3.86%, avg=161.63, stdev=57.94, samples=19 00:24:57.923 iops : min= 31, max= 64, avg=40.37, stdev=14.51, samples=19 00:24:57.923 lat (msec) : 250=0.50%, 500=95.00%, 750=4.50% 00:24:57.923 cpu : usr=98.54%, sys=0.96%, ctx=18, majf=0, minf=17 00:24:57.923 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:24:57.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.923 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.923 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.923 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.923 filename0: (groupid=0, jobs=1): err= 0: pid=4098344: Wed May 15 01:01:43 2024 00:24:57.923 read: IOPS=44, BW=178KiB/s (183kB/s)(1792KiB/10046msec) 00:24:57.923 slat (usec): min=9, max=1007, avg=27.81, stdev=49.38 00:24:57.923 clat (msec): min=244, max=473, avg=358.54, stdev=53.55 00:24:57.923 lat (msec): min=244, max=473, avg=358.57, stdev=53.55 00:24:57.923 clat percentiles (msec): 00:24:57.923 | 1.00th=[ 245], 5.00th=[ 249], 10.00th=[ 251], 20.00th=[ 305], 00:24:57.923 | 30.00th=[ 368], 40.00th=[ 376], 50.00th=[ 380], 60.00th=[ 393], 00:24:57.923 | 70.00th=[ 397], 80.00th=[ 397], 90.00th=[ 405], 95.00th=[ 409], 00:24:57.923 | 99.00th=[ 409], 99.50th=[ 414], 99.90th=[ 472], 99.95th=[ 472], 00:24:57.923 | 99.99th=[ 472] 00:24:57.923 bw ( KiB/s): min= 127, max= 256, per=4.13%, avg=172.70, stdev=58.03, samples=20 00:24:57.923 iops : min= 31, max= 64, avg=43.10, stdev=14.57, samples=20 00:24:57.923 lat (msec) : 250=9.82%, 500=90.18% 00:24:57.923 cpu : usr=97.32%, sys=1.47%, ctx=62, majf=0, minf=21 00:24:57.923 IO depths : 1=2.9%, 2=9.2%, 4=25.0%, 8=53.3%, 16=9.6%, 32=0.0%, >=64=0.0% 00:24:57.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.923 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.923 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.923 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.923 filename0: (groupid=0, jobs=1): err= 0: pid=4098345: Wed May 15 01:01:43 2024 00:24:57.923 read: IOPS=52, BW=208KiB/s (213kB/s)(2112KiB/10150msec) 00:24:57.923 slat (usec): min=4, max=163, avg=50.12, stdev=47.35 00:24:57.923 clat (msec): min=51, max=483, avg=305.12, stdev=82.90 00:24:57.923 lat (msec): 
min=51, max=483, avg=305.17, stdev=82.93 00:24:57.923 clat percentiles (msec): 00:24:57.923 | 1.00th=[ 52], 5.00th=[ 104], 10.00th=[ 226], 20.00th=[ 249], 00:24:57.923 | 30.00th=[ 259], 40.00th=[ 279], 50.00th=[ 305], 60.00th=[ 351], 00:24:57.923 | 70.00th=[ 372], 80.00th=[ 384], 90.00th=[ 397], 95.00th=[ 409], 00:24:57.923 | 99.00th=[ 414], 99.50th=[ 418], 99.90th=[ 485], 99.95th=[ 485], 00:24:57.923 | 99.99th=[ 485] 00:24:57.923 bw ( KiB/s): min= 127, max= 368, per=4.90%, avg=204.70, stdev=69.99, samples=20 00:24:57.923 iops : min= 31, max= 92, avg=51.10, stdev=17.52, samples=20 00:24:57.923 lat (msec) : 100=2.65%, 250=20.45%, 500=76.89% 00:24:57.923 cpu : usr=98.16%, sys=1.23%, ctx=45, majf=0, minf=31 00:24:57.923 IO depths : 1=0.4%, 2=6.4%, 4=24.4%, 8=56.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:24:57.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.923 complete : 0=0.0%, 4=94.3%, 8=0.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.923 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.923 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.923 filename0: (groupid=0, jobs=1): err= 0: pid=4098346: Wed May 15 01:01:43 2024 00:24:57.923 read: IOPS=45, BW=183KiB/s (188kB/s)(1856KiB/10124msec) 00:24:57.923 slat (usec): min=9, max=261, avg=75.73, stdev=52.22 00:24:57.923 clat (msec): min=191, max=579, avg=346.14, stdev=69.36 00:24:57.923 lat (msec): min=191, max=579, avg=346.21, stdev=69.39 00:24:57.923 clat percentiles (msec): 00:24:57.923 | 1.00th=[ 201], 5.00th=[ 226], 10.00th=[ 255], 20.00th=[ 268], 00:24:57.923 | 30.00th=[ 296], 40.00th=[ 342], 50.00th=[ 376], 60.00th=[ 384], 00:24:57.923 | 70.00th=[ 393], 80.00th=[ 397], 90.00th=[ 405], 95.00th=[ 409], 00:24:57.923 | 99.00th=[ 558], 99.50th=[ 567], 99.90th=[ 584], 99.95th=[ 584], 00:24:57.923 | 99.99th=[ 584] 00:24:57.923 bw ( KiB/s): min= 112, max= 256, per=4.30%, avg=179.10, stdev=59.98, samples=20 00:24:57.923 iops : min= 28, max= 64, avg=44.70, stdev=14.98, samples=20 00:24:57.923 lat (msec) : 250=6.47%, 500=91.81%, 750=1.72% 00:24:57.923 cpu : usr=96.73%, sys=1.80%, ctx=218, majf=0, minf=22 00:24:57.923 IO depths : 1=2.8%, 2=9.1%, 4=25.0%, 8=53.4%, 16=9.7%, 32=0.0%, >=64=0.0% 00:24:57.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.924 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.924 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.924 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.924 filename0: (groupid=0, jobs=1): err= 0: pid=4098347: Wed May 15 01:01:43 2024 00:24:57.924 read: IOPS=47, BW=191KiB/s (195kB/s)(1920KiB/10067msec) 00:24:57.924 slat (usec): min=4, max=155, avg=57.86, stdev=45.57 00:24:57.924 clat (msec): min=26, max=490, avg=333.19, stdev=93.42 00:24:57.924 lat (msec): min=26, max=490, avg=333.25, stdev=93.45 00:24:57.924 clat percentiles (msec): 00:24:57.924 | 1.00th=[ 27], 5.00th=[ 87], 10.00th=[ 251], 20.00th=[ 268], 00:24:57.924 | 30.00th=[ 313], 40.00th=[ 359], 50.00th=[ 372], 60.00th=[ 380], 00:24:57.924 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 401], 95.00th=[ 409], 00:24:57.924 | 99.00th=[ 464], 99.50th=[ 472], 99.90th=[ 489], 99.95th=[ 489], 00:24:57.924 | 99.99th=[ 489] 00:24:57.924 bw ( KiB/s): min= 128, max= 384, per=4.58%, avg=191.20, stdev=74.59, samples=20 00:24:57.924 iops : min= 32, max= 96, avg=47.80, stdev=18.65, samples=20 00:24:57.924 lat (msec) : 50=3.33%, 100=3.33%, 250=2.92%, 500=90.42% 00:24:57.924 cpu : 
usr=98.05%, sys=1.27%, ctx=42, majf=0, minf=28 00:24:57.924 IO depths : 1=4.2%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:24:57.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.924 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.924 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.924 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.924 filename0: (groupid=0, jobs=1): err= 0: pid=4098348: Wed May 15 01:01:43 2024 00:24:57.924 read: IOPS=39, BW=160KiB/s (163kB/s)(1600KiB/10022msec) 00:24:57.924 slat (usec): min=13, max=229, avg=91.71, stdev=45.74 00:24:57.924 clat (msec): min=208, max=781, avg=400.07, stdev=67.55 00:24:57.924 lat (msec): min=208, max=781, avg=400.17, stdev=67.55 00:24:57.924 clat percentiles (msec): 00:24:57.924 | 1.00th=[ 251], 5.00th=[ 342], 10.00th=[ 359], 20.00th=[ 372], 00:24:57.924 | 30.00th=[ 384], 40.00th=[ 384], 50.00th=[ 393], 60.00th=[ 397], 00:24:57.924 | 70.00th=[ 401], 80.00th=[ 405], 90.00th=[ 409], 95.00th=[ 456], 00:24:57.924 | 99.00th=[ 684], 99.50th=[ 684], 99.90th=[ 785], 99.95th=[ 785], 00:24:57.924 | 99.99th=[ 785] 00:24:57.924 bw ( KiB/s): min= 127, max= 256, per=3.86%, avg=161.63, stdev=57.94, samples=19 00:24:57.924 iops : min= 31, max= 64, avg=40.37, stdev=14.51, samples=19 00:24:57.924 lat (msec) : 250=1.00%, 500=94.50%, 750=4.00%, 1000=0.50% 00:24:57.924 cpu : usr=97.33%, sys=1.56%, ctx=110, majf=0, minf=25 00:24:57.924 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:24:57.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.924 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.924 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.924 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.924 filename1: (groupid=0, jobs=1): err= 0: pid=4098349: Wed May 15 01:01:43 2024 00:24:57.924 read: IOPS=47, BW=189KiB/s (194kB/s)(1920KiB/10143msec) 00:24:57.924 slat (usec): min=8, max=204, avg=30.33, stdev=19.52 00:24:57.924 clat (msec): min=45, max=503, avg=335.53, stdev=88.96 00:24:57.924 lat (msec): min=45, max=503, avg=335.57, stdev=88.96 00:24:57.924 clat percentiles (msec): 00:24:57.924 | 1.00th=[ 46], 5.00th=[ 104], 10.00th=[ 190], 20.00th=[ 271], 00:24:57.924 | 30.00th=[ 313], 40.00th=[ 359], 50.00th=[ 372], 60.00th=[ 384], 00:24:57.924 | 70.00th=[ 393], 80.00th=[ 397], 90.00th=[ 409], 95.00th=[ 409], 00:24:57.924 | 99.00th=[ 414], 99.50th=[ 477], 99.90th=[ 506], 99.95th=[ 506], 00:24:57.924 | 99.99th=[ 506] 00:24:57.924 bw ( KiB/s): min= 127, max= 368, per=4.44%, avg=185.55, stdev=72.23, samples=20 00:24:57.924 iops : min= 31, max= 92, avg=46.35, stdev=18.09, samples=20 00:24:57.924 lat (msec) : 50=2.92%, 250=7.50%, 500=89.17%, 750=0.42% 00:24:57.924 cpu : usr=96.76%, sys=1.80%, ctx=33, majf=0, minf=27 00:24:57.924 IO depths : 1=4.4%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.1%, 32=0.0%, >=64=0.0% 00:24:57.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.924 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.924 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.924 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.924 filename1: (groupid=0, jobs=1): err= 0: pid=4098350: Wed May 15 01:01:43 2024 00:24:57.924 read: IOPS=63, BW=252KiB/s (258kB/s)(2560KiB/10149msec) 00:24:57.924 slat (usec): min=4, max=163, 
avg=16.30, stdev= 8.82 00:24:57.924 clat (msec): min=30, max=380, avg=251.86, stdev=61.04 00:24:57.924 lat (msec): min=30, max=380, avg=251.88, stdev=61.04 00:24:57.924 clat percentiles (msec): 00:24:57.924 | 1.00th=[ 31], 5.00th=[ 87], 10.00th=[ 197], 20.00th=[ 207], 00:24:57.924 | 30.00th=[ 236], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 268], 00:24:57.924 | 70.00th=[ 288], 80.00th=[ 305], 90.00th=[ 309], 95.00th=[ 313], 00:24:57.924 | 99.00th=[ 380], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:24:57.924 | 99.99th=[ 380] 00:24:57.924 bw ( KiB/s): min= 128, max= 384, per=5.98%, avg=249.50, stdev=48.53, samples=20 00:24:57.924 iops : min= 32, max= 96, avg=62.30, stdev=12.14, samples=20 00:24:57.924 lat (msec) : 50=2.50%, 100=2.50%, 250=39.06%, 500=55.94% 00:24:57.924 cpu : usr=97.41%, sys=1.62%, ctx=53, majf=0, minf=34 00:24:57.924 IO depths : 1=0.6%, 2=6.9%, 4=25.0%, 8=55.6%, 16=11.9%, 32=0.0%, >=64=0.0% 00:24:57.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.924 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.924 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.924 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.924 filename1: (groupid=0, jobs=1): err= 0: pid=4098351: Wed May 15 01:01:43 2024 00:24:57.924 read: IOPS=42, BW=171KiB/s (175kB/s)(1728KiB/10127msec) 00:24:57.924 slat (usec): min=6, max=290, avg=117.20, stdev=28.87 00:24:57.924 clat (msec): min=143, max=468, avg=374.05, stdev=61.36 00:24:57.924 lat (msec): min=143, max=468, avg=374.16, stdev=61.37 00:24:57.924 clat percentiles (msec): 00:24:57.924 | 1.00th=[ 144], 5.00th=[ 205], 10.00th=[ 351], 20.00th=[ 372], 00:24:57.924 | 30.00th=[ 376], 40.00th=[ 384], 50.00th=[ 388], 60.00th=[ 388], 00:24:57.924 | 70.00th=[ 397], 80.00th=[ 409], 90.00th=[ 414], 95.00th=[ 422], 00:24:57.924 | 99.00th=[ 451], 99.50th=[ 451], 99.90th=[ 468], 99.95th=[ 468], 00:24:57.924 | 99.99th=[ 468] 00:24:57.924 bw ( KiB/s): min= 127, max= 256, per=3.98%, avg=166.30, stdev=58.55, samples=20 00:24:57.924 iops : min= 31, max= 64, avg=41.50, stdev=14.61, samples=20 00:24:57.924 lat (msec) : 250=7.41%, 500=92.59% 00:24:57.924 cpu : usr=97.15%, sys=1.59%, ctx=71, majf=0, minf=37 00:24:57.924 IO depths : 1=4.4%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.1%, 32=0.0%, >=64=0.0% 00:24:57.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.924 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.924 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.924 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.924 filename1: (groupid=0, jobs=1): err= 0: pid=4098352: Wed May 15 01:01:43 2024 00:24:57.924 read: IOPS=39, BW=159KiB/s (163kB/s)(1600KiB/10038msec) 00:24:57.924 slat (usec): min=9, max=154, avg=90.93, stdev=41.39 00:24:57.924 clat (msec): min=208, max=698, avg=400.71, stdev=68.23 00:24:57.924 lat (msec): min=208, max=698, avg=400.80, stdev=68.22 00:24:57.924 clat percentiles (msec): 00:24:57.924 | 1.00th=[ 313], 5.00th=[ 342], 10.00th=[ 359], 20.00th=[ 372], 00:24:57.924 | 30.00th=[ 380], 40.00th=[ 384], 50.00th=[ 393], 60.00th=[ 397], 00:24:57.924 | 70.00th=[ 397], 80.00th=[ 405], 90.00th=[ 414], 95.00th=[ 456], 00:24:57.924 | 99.00th=[ 701], 99.50th=[ 701], 99.90th=[ 701], 99.95th=[ 701], 00:24:57.924 | 99.99th=[ 701] 00:24:57.924 bw ( KiB/s): min= 127, max= 256, per=3.86%, avg=161.58, stdev=57.85, samples=19 00:24:57.924 iops : min= 31, max= 64, avg=40.32, 
stdev=14.42, samples=19 00:24:57.924 lat (msec) : 250=0.50%, 500=94.50%, 750=5.00% 00:24:57.924 cpu : usr=98.72%, sys=0.88%, ctx=16, majf=0, minf=22 00:24:57.924 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:24:57.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.924 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.924 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.924 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.924 filename1: (groupid=0, jobs=1): err= 0: pid=4098353: Wed May 15 01:01:43 2024 00:24:57.924 read: IOPS=41, BW=164KiB/s (168kB/s)(1664KiB/10124msec) 00:24:57.924 slat (usec): min=11, max=143, avg=54.04, stdev=36.10 00:24:57.924 clat (msec): min=237, max=568, avg=388.92, stdev=37.77 00:24:57.924 lat (msec): min=237, max=568, avg=388.97, stdev=37.77 00:24:57.924 clat percentiles (msec): 00:24:57.924 | 1.00th=[ 249], 5.00th=[ 347], 10.00th=[ 359], 20.00th=[ 368], 00:24:57.924 | 30.00th=[ 376], 40.00th=[ 384], 50.00th=[ 388], 60.00th=[ 393], 00:24:57.924 | 70.00th=[ 397], 80.00th=[ 401], 90.00th=[ 422], 95.00th=[ 468], 00:24:57.924 | 99.00th=[ 542], 99.50th=[ 558], 99.90th=[ 567], 99.95th=[ 567], 00:24:57.924 | 99.99th=[ 567] 00:24:57.924 bw ( KiB/s): min= 127, max= 256, per=3.82%, avg=159.90, stdev=55.12, samples=20 00:24:57.924 iops : min= 31, max= 64, avg=39.90, stdev=13.74, samples=20 00:24:57.924 lat (msec) : 250=1.44%, 500=96.63%, 750=1.92% 00:24:57.924 cpu : usr=98.71%, sys=0.86%, ctx=55, majf=0, minf=19 00:24:57.924 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:24:57.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.925 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.925 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.925 filename1: (groupid=0, jobs=1): err= 0: pid=4098354: Wed May 15 01:01:43 2024 00:24:57.925 read: IOPS=43, BW=172KiB/s (176kB/s)(1728KiB/10046msec) 00:24:57.925 slat (usec): min=10, max=176, avg=62.15, stdev=43.42 00:24:57.925 clat (msec): min=206, max=466, avg=371.52, stdev=57.53 00:24:57.925 lat (msec): min=206, max=466, avg=371.58, stdev=57.55 00:24:57.925 clat percentiles (msec): 00:24:57.925 | 1.00th=[ 207], 5.00th=[ 226], 10.00th=[ 264], 20.00th=[ 363], 00:24:57.925 | 30.00th=[ 372], 40.00th=[ 380], 50.00th=[ 393], 60.00th=[ 397], 00:24:57.925 | 70.00th=[ 397], 80.00th=[ 401], 90.00th=[ 422], 95.00th=[ 422], 00:24:57.925 | 99.00th=[ 468], 99.50th=[ 468], 99.90th=[ 468], 99.95th=[ 468], 00:24:57.925 | 99.99th=[ 468] 00:24:57.925 bw ( KiB/s): min= 127, max= 256, per=3.98%, avg=166.30, stdev=57.03, samples=20 00:24:57.925 iops : min= 31, max= 64, avg=41.50, stdev=14.31, samples=20 00:24:57.925 lat (msec) : 250=7.41%, 500=92.59% 00:24:57.925 cpu : usr=97.94%, sys=1.26%, ctx=71, majf=0, minf=17 00:24:57.925 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:24:57.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.925 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.925 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.925 filename1: (groupid=0, jobs=1): err= 0: pid=4098355: Wed May 15 01:01:43 2024 00:24:57.925 read: IOPS=41, BW=165KiB/s 
(169kB/s)(1664KiB/10106msec) 00:24:57.925 slat (usec): min=16, max=139, avg=29.13, stdev=16.84 00:24:57.925 clat (msec): min=146, max=687, avg=388.44, stdev=82.72 00:24:57.925 lat (msec): min=146, max=687, avg=388.47, stdev=82.72 00:24:57.925 clat percentiles (msec): 00:24:57.925 | 1.00th=[ 146], 5.00th=[ 266], 10.00th=[ 351], 20.00th=[ 376], 00:24:57.925 | 30.00th=[ 376], 40.00th=[ 388], 50.00th=[ 388], 60.00th=[ 393], 00:24:57.925 | 70.00th=[ 397], 80.00th=[ 409], 90.00th=[ 414], 95.00th=[ 451], 00:24:57.925 | 99.00th=[ 684], 99.50th=[ 684], 99.90th=[ 684], 99.95th=[ 684], 00:24:57.925 | 99.99th=[ 684] 00:24:57.925 bw ( KiB/s): min= 128, max= 256, per=4.03%, avg=168.42, stdev=61.13, samples=19 00:24:57.925 iops : min= 32, max= 64, avg=42.11, stdev=15.28, samples=19 00:24:57.925 lat (msec) : 250=3.85%, 500=91.83%, 750=4.33% 00:24:57.925 cpu : usr=98.24%, sys=1.36%, ctx=45, majf=0, minf=20 00:24:57.925 IO depths : 1=4.3%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:24:57.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.925 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.925 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.925 filename1: (groupid=0, jobs=1): err= 0: pid=4098356: Wed May 15 01:01:43 2024 00:24:57.925 read: IOPS=39, BW=158KiB/s (162kB/s)(1600KiB/10097msec) 00:24:57.925 slat (usec): min=16, max=142, avg=62.22, stdev=38.82 00:24:57.925 clat (msec): min=190, max=681, avg=400.75, stdev=68.25 00:24:57.925 lat (msec): min=190, max=681, avg=400.81, stdev=68.25 00:24:57.925 clat percentiles (msec): 00:24:57.925 | 1.00th=[ 207], 5.00th=[ 342], 10.00th=[ 359], 20.00th=[ 372], 00:24:57.925 | 30.00th=[ 384], 40.00th=[ 384], 50.00th=[ 393], 60.00th=[ 397], 00:24:57.925 | 70.00th=[ 401], 80.00th=[ 405], 90.00th=[ 414], 95.00th=[ 542], 00:24:57.925 | 99.00th=[ 684], 99.50th=[ 684], 99.90th=[ 684], 99.95th=[ 684], 00:24:57.925 | 99.99th=[ 684] 00:24:57.925 bw ( KiB/s): min= 128, max= 256, per=3.86%, avg=161.68, stdev=52.50, samples=19 00:24:57.925 iops : min= 32, max= 64, avg=40.42, stdev=13.12, samples=19 00:24:57.925 lat (msec) : 250=1.00%, 500=93.50%, 750=5.50% 00:24:57.925 cpu : usr=98.64%, sys=0.95%, ctx=14, majf=0, minf=22 00:24:57.925 IO depths : 1=3.5%, 2=9.8%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:24:57.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.925 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.925 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.925 filename2: (groupid=0, jobs=1): err= 0: pid=4098357: Wed May 15 01:01:43 2024 00:24:57.925 read: IOPS=40, BW=164KiB/s (168kB/s)(1656KiB/10118msec) 00:24:57.925 slat (usec): min=10, max=174, avg=94.86, stdev=40.49 00:24:57.925 clat (msec): min=146, max=698, avg=389.87, stdev=67.72 00:24:57.925 lat (msec): min=146, max=698, avg=389.97, stdev=67.72 00:24:57.925 clat percentiles (msec): 00:24:57.925 | 1.00th=[ 146], 5.00th=[ 292], 10.00th=[ 351], 20.00th=[ 372], 00:24:57.925 | 30.00th=[ 380], 40.00th=[ 388], 50.00th=[ 388], 60.00th=[ 393], 00:24:57.925 | 70.00th=[ 401], 80.00th=[ 414], 90.00th=[ 451], 95.00th=[ 489], 00:24:57.925 | 99.00th=[ 550], 99.50th=[ 550], 99.90th=[ 701], 99.95th=[ 701], 00:24:57.925 | 99.99th=[ 701] 00:24:57.925 bw ( KiB/s): min= 127, max= 256, per=4.01%, avg=167.47, 
stdev=58.35, samples=19 00:24:57.925 iops : min= 31, max= 64, avg=41.79, stdev=14.65, samples=19 00:24:57.925 lat (msec) : 250=3.38%, 500=91.79%, 750=4.83% 00:24:57.925 cpu : usr=98.15%, sys=1.28%, ctx=33, majf=0, minf=32 00:24:57.925 IO depths : 1=3.1%, 2=9.4%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:24:57.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.925 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.925 issued rwts: total=414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.925 filename2: (groupid=0, jobs=1): err= 0: pid=4098358: Wed May 15 01:01:43 2024 00:24:57.925 read: IOPS=45, BW=183KiB/s (188kB/s)(1856KiB/10124msec) 00:24:57.925 slat (usec): min=8, max=259, avg=68.29, stdev=47.28 00:24:57.925 clat (msec): min=144, max=564, avg=348.51, stdev=75.12 00:24:57.925 lat (msec): min=144, max=565, avg=348.58, stdev=75.14 00:24:57.925 clat percentiles (msec): 00:24:57.925 | 1.00th=[ 146], 5.00th=[ 205], 10.00th=[ 251], 20.00th=[ 257], 00:24:57.925 | 30.00th=[ 347], 40.00th=[ 368], 50.00th=[ 376], 60.00th=[ 380], 00:24:57.925 | 70.00th=[ 388], 80.00th=[ 397], 90.00th=[ 401], 95.00th=[ 468], 00:24:57.925 | 99.00th=[ 527], 99.50th=[ 535], 99.90th=[ 567], 99.95th=[ 567], 00:24:57.925 | 99.99th=[ 567] 00:24:57.925 bw ( KiB/s): min= 128, max= 256, per=4.30%, avg=179.10, stdev=59.66, samples=20 00:24:57.925 iops : min= 32, max= 64, avg=44.70, stdev=14.83, samples=20 00:24:57.925 lat (msec) : 250=8.84%, 500=89.87%, 750=1.29% 00:24:57.925 cpu : usr=97.03%, sys=1.80%, ctx=59, majf=0, minf=30 00:24:57.925 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:24:57.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.925 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.925 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.925 filename2: (groupid=0, jobs=1): err= 0: pid=4098359: Wed May 15 01:01:43 2024 00:24:57.925 read: IOPS=42, BW=171KiB/s (175kB/s)(1728KiB/10124msec) 00:24:57.925 slat (usec): min=8, max=150, avg=44.45, stdev=38.36 00:24:57.925 clat (msec): min=207, max=554, avg=372.05, stdev=64.69 00:24:57.925 lat (msec): min=207, max=554, avg=372.09, stdev=64.70 00:24:57.925 clat percentiles (msec): 00:24:57.925 | 1.00th=[ 207], 5.00th=[ 232], 10.00th=[ 257], 20.00th=[ 368], 00:24:57.925 | 30.00th=[ 372], 40.00th=[ 380], 50.00th=[ 384], 60.00th=[ 393], 00:24:57.925 | 70.00th=[ 397], 80.00th=[ 397], 90.00th=[ 414], 95.00th=[ 422], 00:24:57.925 | 99.00th=[ 558], 99.50th=[ 558], 99.90th=[ 558], 99.95th=[ 558], 00:24:57.925 | 99.99th=[ 558] 00:24:57.925 bw ( KiB/s): min= 128, max= 256, per=4.20%, avg=175.05, stdev=61.80, samples=19 00:24:57.925 iops : min= 32, max= 64, avg=43.68, stdev=15.42, samples=19 00:24:57.925 lat (msec) : 250=6.94%, 500=89.35%, 750=3.70% 00:24:57.925 cpu : usr=98.58%, sys=1.01%, ctx=39, majf=0, minf=23 00:24:57.925 IO depths : 1=3.9%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:24:57.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.925 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.925 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.925 filename2: (groupid=0, jobs=1): err= 0: 
pid=4098360: Wed May 15 01:01:43 2024 00:24:57.925 read: IOPS=41, BW=165KiB/s (169kB/s)(1664KiB/10099msec) 00:24:57.925 slat (usec): min=9, max=155, avg=66.41, stdev=47.97 00:24:57.925 clat (msec): min=143, max=681, avg=387.81, stdev=80.48 00:24:57.925 lat (msec): min=143, max=681, avg=387.88, stdev=80.48 00:24:57.925 clat percentiles (msec): 00:24:57.925 | 1.00th=[ 144], 5.00th=[ 313], 10.00th=[ 351], 20.00th=[ 376], 00:24:57.925 | 30.00th=[ 376], 40.00th=[ 380], 50.00th=[ 388], 60.00th=[ 393], 00:24:57.925 | 70.00th=[ 397], 80.00th=[ 401], 90.00th=[ 409], 95.00th=[ 456], 00:24:57.925 | 99.00th=[ 684], 99.50th=[ 684], 99.90th=[ 684], 99.95th=[ 684], 00:24:57.925 | 99.99th=[ 684] 00:24:57.925 bw ( KiB/s): min= 128, max= 256, per=4.03%, avg=168.42, stdev=61.13, samples=19 00:24:57.925 iops : min= 32, max= 64, avg=42.11, stdev=15.28, samples=19 00:24:57.925 lat (msec) : 250=4.33%, 500=91.35%, 750=4.33% 00:24:57.925 cpu : usr=98.48%, sys=1.06%, ctx=56, majf=0, minf=25 00:24:57.925 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:24:57.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.925 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.925 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.926 filename2: (groupid=0, jobs=1): err= 0: pid=4098361: Wed May 15 01:01:43 2024 00:24:57.926 read: IOPS=39, BW=160KiB/s (163kB/s)(1600KiB/10030msec) 00:24:57.926 slat (usec): min=5, max=147, avg=63.38, stdev=48.60 00:24:57.926 clat (msec): min=211, max=692, avg=400.64, stdev=65.90 00:24:57.926 lat (msec): min=211, max=692, avg=400.70, stdev=65.89 00:24:57.926 clat percentiles (msec): 00:24:57.926 | 1.00th=[ 338], 5.00th=[ 359], 10.00th=[ 359], 20.00th=[ 372], 00:24:57.926 | 30.00th=[ 384], 40.00th=[ 384], 50.00th=[ 393], 60.00th=[ 397], 00:24:57.926 | 70.00th=[ 401], 80.00th=[ 405], 90.00th=[ 409], 95.00th=[ 456], 00:24:57.926 | 99.00th=[ 693], 99.50th=[ 693], 99.90th=[ 693], 99.95th=[ 693], 00:24:57.926 | 99.99th=[ 693] 00:24:57.926 bw ( KiB/s): min= 127, max= 256, per=3.86%, avg=161.63, stdev=57.94, samples=19 00:24:57.926 iops : min= 31, max= 64, avg=40.37, stdev=14.51, samples=19 00:24:57.926 lat (msec) : 250=0.50%, 500=95.00%, 750=4.50% 00:24:57.926 cpu : usr=98.60%, sys=1.03%, ctx=14, majf=0, minf=19 00:24:57.926 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:24:57.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.926 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.926 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.926 filename2: (groupid=0, jobs=1): err= 0: pid=4098362: Wed May 15 01:01:43 2024 00:24:57.926 read: IOPS=44, BW=178KiB/s (183kB/s)(1792KiB/10046msec) 00:24:57.926 slat (nsec): min=9095, max=64083, avg=21433.56, stdev=10894.63 00:24:57.926 clat (msec): min=214, max=552, avg=358.59, stdev=55.61 00:24:57.926 lat (msec): min=214, max=552, avg=358.61, stdev=55.61 00:24:57.926 clat percentiles (msec): 00:24:57.926 | 1.00th=[ 236], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 305], 00:24:57.926 | 30.00th=[ 342], 40.00th=[ 372], 50.00th=[ 380], 60.00th=[ 388], 00:24:57.926 | 70.00th=[ 397], 80.00th=[ 397], 90.00th=[ 405], 95.00th=[ 409], 00:24:57.926 | 99.00th=[ 468], 99.50th=[ 472], 99.90th=[ 550], 99.95th=[ 550], 00:24:57.926 
| 99.99th=[ 550] 00:24:57.926 bw ( KiB/s): min= 112, max= 256, per=4.13%, avg=172.70, stdev=59.84, samples=20 00:24:57.926 iops : min= 28, max= 64, avg=43.10, stdev=15.01, samples=20 00:24:57.926 lat (msec) : 250=6.47%, 500=93.08%, 750=0.45% 00:24:57.926 cpu : usr=98.42%, sys=1.18%, ctx=20, majf=0, minf=22 00:24:57.926 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.7%, 32=0.0%, >=64=0.0% 00:24:57.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.926 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.926 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.926 filename2: (groupid=0, jobs=1): err= 0: pid=4098363: Wed May 15 01:01:43 2024 00:24:57.926 read: IOPS=40, BW=164KiB/s (168kB/s)(1656KiB/10106msec) 00:24:57.926 slat (usec): min=8, max=107, avg=27.50, stdev=14.49 00:24:57.926 clat (msec): min=146, max=687, avg=389.94, stdev=84.23 00:24:57.926 lat (msec): min=146, max=687, avg=389.97, stdev=84.22 00:24:57.926 clat percentiles (msec): 00:24:57.926 | 1.00th=[ 146], 5.00th=[ 266], 10.00th=[ 309], 20.00th=[ 376], 00:24:57.926 | 30.00th=[ 376], 40.00th=[ 388], 50.00th=[ 388], 60.00th=[ 393], 00:24:57.926 | 70.00th=[ 397], 80.00th=[ 409], 90.00th=[ 451], 95.00th=[ 489], 00:24:57.926 | 99.00th=[ 684], 99.50th=[ 684], 99.90th=[ 684], 99.95th=[ 684], 00:24:57.926 | 99.99th=[ 684] 00:24:57.926 bw ( KiB/s): min= 128, max= 256, per=4.01%, avg=167.58, stdev=58.27, samples=19 00:24:57.926 iops : min= 32, max= 64, avg=41.89, stdev=14.57, samples=19 00:24:57.926 lat (msec) : 250=3.38%, 500=92.27%, 750=4.35% 00:24:57.926 cpu : usr=98.30%, sys=1.30%, ctx=16, majf=0, minf=25 00:24:57.926 IO depths : 1=3.6%, 2=9.9%, 4=25.1%, 8=52.7%, 16=8.7%, 32=0.0%, >=64=0.0% 00:24:57.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.926 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.926 issued rwts: total=414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:57.926 filename2: (groupid=0, jobs=1): err= 0: pid=4098364: Wed May 15 01:01:43 2024 00:24:57.926 read: IOPS=41, BW=164KiB/s (168kB/s)(1664KiB/10124msec) 00:24:57.926 slat (usec): min=12, max=135, avg=43.79, stdev=23.05 00:24:57.926 clat (msec): min=227, max=525, avg=386.38, stdev=35.28 00:24:57.926 lat (msec): min=227, max=525, avg=386.42, stdev=35.28 00:24:57.926 clat percentiles (msec): 00:24:57.926 | 1.00th=[ 288], 5.00th=[ 342], 10.00th=[ 359], 20.00th=[ 368], 00:24:57.926 | 30.00th=[ 380], 40.00th=[ 384], 50.00th=[ 388], 60.00th=[ 397], 00:24:57.926 | 70.00th=[ 401], 80.00th=[ 405], 90.00th=[ 409], 95.00th=[ 422], 00:24:57.926 | 99.00th=[ 502], 99.50th=[ 506], 99.90th=[ 527], 99.95th=[ 527], 00:24:57.926 | 99.99th=[ 527] 00:24:57.926 bw ( KiB/s): min= 127, max= 256, per=3.82%, avg=159.90, stdev=53.49, samples=20 00:24:57.926 iops : min= 31, max= 64, avg=39.90, stdev=13.41, samples=20 00:24:57.926 lat (msec) : 250=0.48%, 500=98.08%, 750=1.44% 00:24:57.926 cpu : usr=98.47%, sys=1.13%, ctx=26, majf=0, minf=25 00:24:57.926 IO depths : 1=4.3%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:24:57.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.926 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.926 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.926 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:24:57.926 00:24:57.926 Run status group 0 (all jobs): 00:24:57.926 READ: bw=4166KiB/s (4266kB/s), 158KiB/s-252KiB/s (162kB/s-258kB/s), io=41.3MiB (43.3MB), run=10022-10150msec 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:57.926 bdev_null0 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.926 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:57.927 [2024-05-15 01:01:43.514759] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:57.927 01:01:43 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:57.927 bdev_null1 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:57.927 { 00:24:57.927 "params": 
{ 00:24:57.927 "name": "Nvme$subsystem", 00:24:57.927 "trtype": "$TEST_TRANSPORT", 00:24:57.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:57.927 "adrfam": "ipv4", 00:24:57.927 "trsvcid": "$NVMF_PORT", 00:24:57.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:57.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:57.927 "hdgst": ${hdgst:-false}, 00:24:57.927 "ddgst": ${ddgst:-false} 00:24:57.927 }, 00:24:57.927 "method": "bdev_nvme_attach_controller" 00:24:57.927 } 00:24:57.927 EOF 00:24:57.927 )") 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:57.927 { 00:24:57.927 "params": { 00:24:57.927 "name": "Nvme$subsystem", 00:24:57.927 "trtype": "$TEST_TRANSPORT", 00:24:57.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:57.927 "adrfam": "ipv4", 00:24:57.927 "trsvcid": "$NVMF_PORT", 00:24:57.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:57.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:57.927 "hdgst": ${hdgst:-false}, 00:24:57.927 "ddgst": ${ddgst:-false} 00:24:57.927 }, 00:24:57.927 "method": "bdev_nvme_attach_controller" 00:24:57.927 } 00:24:57.927 EOF 00:24:57.927 )") 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
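[Editor's note: the heredoc fragments accumulated above are joined by jq/printf into the bdev_nvme_attach_controller parameters that fio reads over /dev/fd/62; the joined output is printed in full just below. For reference, a sketch of a complete config file built from one such fragment — the params block is copied from the log, while the outer "subsystems"/"bdev" envelope is an assumption based on SPDK's standard JSON config format, which this log never prints:]

  cat > bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF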
00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:57.927 "params": { 00:24:57.927 "name": "Nvme0", 00:24:57.927 "trtype": "tcp", 00:24:57.927 "traddr": "10.0.0.2", 00:24:57.927 "adrfam": "ipv4", 00:24:57.927 "trsvcid": "4420", 00:24:57.927 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:57.927 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:57.927 "hdgst": false, 00:24:57.927 "ddgst": false 00:24:57.927 }, 00:24:57.927 "method": "bdev_nvme_attach_controller" 00:24:57.927 },{ 00:24:57.927 "params": { 00:24:57.927 "name": "Nvme1", 00:24:57.927 "trtype": "tcp", 00:24:57.927 "traddr": "10.0.0.2", 00:24:57.927 "adrfam": "ipv4", 00:24:57.927 "trsvcid": "4420", 00:24:57.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:57.927 "hdgst": false, 00:24:57.927 "ddgst": false 00:24:57.927 }, 00:24:57.927 "method": "bdev_nvme_attach_controller" 00:24:57.927 }' 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:24:57.927 01:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:57.927 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:57.927 ... 00:24:57.927 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:57.927 ... 
00:24:57.927 fio-3.35 00:24:57.927 Starting 4 threads 00:24:57.927 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.188 00:25:03.188 filename0: (groupid=0, jobs=1): err= 0: pid=4099332: Wed May 15 01:01:49 2024 00:25:03.188 read: IOPS=1642, BW=12.8MiB/s (13.5MB/s)(64.2MiB/5002msec) 00:25:03.188 slat (nsec): min=7861, max=55309, avg=12942.84, stdev=6277.88 00:25:03.188 clat (usec): min=1142, max=8633, avg=4830.28, stdev=836.90 00:25:03.188 lat (usec): min=1154, max=8667, avg=4843.22, stdev=836.50 00:25:03.188 clat percentiles (usec): 00:25:03.188 | 1.00th=[ 3228], 5.00th=[ 3818], 10.00th=[ 4015], 20.00th=[ 4293], 00:25:03.188 | 30.00th=[ 4424], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4752], 00:25:03.188 | 70.00th=[ 4948], 80.00th=[ 5211], 90.00th=[ 5997], 95.00th=[ 6783], 00:25:03.188 | 99.00th=[ 7504], 99.50th=[ 7701], 99.90th=[ 8160], 99.95th=[ 8291], 00:25:03.188 | 99.99th=[ 8586] 00:25:03.188 bw ( KiB/s): min=12800, max=13424, per=24.16%, avg=13134.40, stdev=257.82, samples=10 00:25:03.188 iops : min= 1600, max= 1678, avg=1641.80, stdev=32.23, samples=10 00:25:03.188 lat (msec) : 2=0.10%, 4=9.28%, 10=90.63% 00:25:03.188 cpu : usr=95.10%, sys=4.46%, ctx=12, majf=0, minf=0 00:25:03.188 IO depths : 1=0.1%, 2=5.5%, 4=66.7%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:03.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.188 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.188 issued rwts: total=8214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.188 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:03.188 filename0: (groupid=0, jobs=1): err= 0: pid=4099333: Wed May 15 01:01:49 2024 00:25:03.188 read: IOPS=1830, BW=14.3MiB/s (15.0MB/s)(71.6MiB/5004msec) 00:25:03.188 slat (nsec): min=7918, max=68680, avg=12363.78, stdev=5795.88 00:25:03.188 clat (usec): min=850, max=8566, avg=4329.72, stdev=780.79 00:25:03.188 lat (usec): min=863, max=8580, avg=4342.09, stdev=781.15 00:25:03.188 clat percentiles (usec): 00:25:03.188 | 1.00th=[ 2704], 5.00th=[ 3195], 10.00th=[ 3425], 20.00th=[ 3720], 00:25:03.188 | 30.00th=[ 3916], 40.00th=[ 4178], 50.00th=[ 4359], 60.00th=[ 4555], 00:25:03.188 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 5014], 95.00th=[ 5735], 00:25:03.188 | 99.00th=[ 7046], 99.50th=[ 7373], 99.90th=[ 7767], 99.95th=[ 7963], 00:25:03.188 | 99.99th=[ 8586] 00:25:03.188 bw ( KiB/s): min=13792, max=15744, per=26.95%, avg=14651.20, stdev=655.42, samples=10 00:25:03.188 iops : min= 1724, max= 1968, avg=1831.40, stdev=81.93, samples=10 00:25:03.188 lat (usec) : 1000=0.01% 00:25:03.188 lat (msec) : 2=0.07%, 4=33.63%, 10=66.30% 00:25:03.188 cpu : usr=95.00%, sys=4.58%, ctx=7, majf=0, minf=0 00:25:03.188 IO depths : 1=0.1%, 2=7.2%, 4=63.6%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:03.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.188 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.188 issued rwts: total=9162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.188 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:03.188 filename1: (groupid=0, jobs=1): err= 0: pid=4099334: Wed May 15 01:01:49 2024 00:25:03.188 read: IOPS=1694, BW=13.2MiB/s (13.9MB/s)(66.2MiB/5003msec) 00:25:03.188 slat (nsec): min=7891, max=56182, avg=12676.97, stdev=6065.31 00:25:03.188 clat (usec): min=1029, max=8331, avg=4680.91, stdev=825.04 00:25:03.188 lat (usec): min=1041, max=8358, avg=4693.59, stdev=825.07 00:25:03.188 clat percentiles (usec): 00:25:03.188 
| 1.00th=[ 3032], 5.00th=[ 3523], 10.00th=[ 3818], 20.00th=[ 4146], 00:25:03.188 | 30.00th=[ 4293], 40.00th=[ 4490], 50.00th=[ 4621], 60.00th=[ 4686], 00:25:03.188 | 70.00th=[ 4817], 80.00th=[ 5080], 90.00th=[ 5669], 95.00th=[ 6521], 00:25:03.188 | 99.00th=[ 7373], 99.50th=[ 7635], 99.90th=[ 7832], 99.95th=[ 8094], 00:25:03.188 | 99.99th=[ 8356] 00:25:03.188 bw ( KiB/s): min=12857, max=14400, per=24.94%, avg=13556.10, stdev=448.45, samples=10 00:25:03.188 iops : min= 1607, max= 1800, avg=1694.50, stdev=56.08, samples=10 00:25:03.188 lat (msec) : 2=0.08%, 4=15.64%, 10=84.28% 00:25:03.188 cpu : usr=95.00%, sys=4.58%, ctx=7, majf=0, minf=0 00:25:03.188 IO depths : 1=0.1%, 2=4.6%, 4=67.0%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:03.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.188 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.188 issued rwts: total=8479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.188 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:03.188 filename1: (groupid=0, jobs=1): err= 0: pid=4099335: Wed May 15 01:01:49 2024 00:25:03.188 read: IOPS=1628, BW=12.7MiB/s (13.3MB/s)(63.6MiB/5001msec) 00:25:03.188 slat (nsec): min=8498, max=69862, avg=17288.80, stdev=7314.33 00:25:03.188 clat (usec): min=980, max=9335, avg=4858.27, stdev=860.57 00:25:03.188 lat (usec): min=997, max=9360, avg=4875.56, stdev=860.35 00:25:03.188 clat percentiles (usec): 00:25:03.188 | 1.00th=[ 3163], 5.00th=[ 3720], 10.00th=[ 4015], 20.00th=[ 4293], 00:25:03.188 | 30.00th=[ 4424], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4817], 00:25:03.188 | 70.00th=[ 5014], 80.00th=[ 5342], 90.00th=[ 6063], 95.00th=[ 6587], 00:25:03.188 | 99.00th=[ 7635], 99.50th=[ 7898], 99.90th=[ 8717], 99.95th=[ 8979], 00:25:03.188 | 99.99th=[ 9372] 00:25:03.188 bw ( KiB/s): min=12424, max=13568, per=23.95%, avg=13021.60, stdev=390.17, samples=10 00:25:03.188 iops : min= 1553, max= 1696, avg=1627.70, stdev=48.77, samples=10 00:25:03.188 lat (usec) : 1000=0.01% 00:25:03.188 lat (msec) : 2=0.06%, 4=9.87%, 10=90.06% 00:25:03.188 cpu : usr=92.22%, sys=5.66%, ctx=307, majf=0, minf=9 00:25:03.188 IO depths : 1=0.1%, 2=6.4%, 4=64.8%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:03.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.188 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.188 issued rwts: total=8145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.188 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:03.188 00:25:03.188 Run status group 0 (all jobs): 00:25:03.188 READ: bw=53.1MiB/s (55.7MB/s), 12.7MiB/s-14.3MiB/s (13.3MB/s-15.0MB/s), io=266MiB (279MB), run=5001-5004msec 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:03.188 
01:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.188 00:25:03.188 real 0m23.780s 00:25:03.188 user 4m34.121s 00:25:03.188 sys 0m5.854s 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:03.188 01:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:03.188 ************************************ 00:25:03.188 END TEST fio_dif_rand_params 00:25:03.188 ************************************ 00:25:03.188 01:01:49 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:25:03.188 01:01:49 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:03.188 01:01:49 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:03.188 01:01:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:03.188 ************************************ 00:25:03.188 START TEST fio_dif_digest 00:25:03.188 ************************************ 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:25:03.188 01:01:49 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:03.188 bdev_null0 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:03.188 [2024-05-15 01:01:49.720570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:03.188 01:01:49 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:25:03.188 01:01:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:03.188 { 00:25:03.188 "params": { 00:25:03.188 "name": "Nvme$subsystem", 00:25:03.188 "trtype": "$TEST_TRANSPORT", 00:25:03.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.188 "adrfam": "ipv4", 00:25:03.188 "trsvcid": "$NVMF_PORT", 00:25:03.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.188 "hdgst": ${hdgst:-false}, 00:25:03.189 "ddgst": ${ddgst:-false} 00:25:03.189 }, 00:25:03.189 "method": "bdev_nvme_attach_controller" 00:25:03.189 } 00:25:03.189 EOF 00:25:03.189 )") 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
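Once assembled, the JSON lands on one anonymous fd and the generated fio job on another, and fio_bdev injects the external spdk_bdev ioengine via LD_PRELOAD; that is the expanded command visible further down. With regular files standing in for /dev/fd/62 and /dev/fd/61 and a placeholder plugin path, the equivalent standalone invocation is roughly:

# bdev.json: the attach-controller config printed below; job.fio: the generated fio job
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio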
00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:03.189 "params": { 00:25:03.189 "name": "Nvme0", 00:25:03.189 "trtype": "tcp", 00:25:03.189 "traddr": "10.0.0.2", 00:25:03.189 "adrfam": "ipv4", 00:25:03.189 "trsvcid": "4420", 00:25:03.189 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:03.189 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:03.189 "hdgst": true, 00:25:03.189 "ddgst": true 00:25:03.189 }, 00:25:03.189 "method": "bdev_nvme_attach_controller" 00:25:03.189 }' 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:03.189 01:01:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:03.189 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:03.189 ... 
00:25:03.189 fio-3.35 00:25:03.189 Starting 3 threads 00:25:03.189 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.386 00:25:15.386 filename0: (groupid=0, jobs=1): err= 0: pid=4099993: Wed May 15 01:02:00 2024 00:25:15.386 read: IOPS=185, BW=23.1MiB/s (24.3MB/s)(232MiB/10045msec) 00:25:15.386 slat (nsec): min=8318, max=71849, avg=19073.85, stdev=4344.37 00:25:15.386 clat (usec): min=9369, max=58971, avg=16165.25, stdev=3030.23 00:25:15.386 lat (usec): min=9388, max=58989, avg=16184.32, stdev=3030.25 00:25:15.386 clat percentiles (usec): 00:25:15.386 | 1.00th=[11076], 5.00th=[12518], 10.00th=[14222], 20.00th=[15139], 00:25:15.386 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16188], 60.00th=[16450], 00:25:15.386 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[17957], 00:25:15.386 | 99.00th=[19006], 99.50th=[19530], 99.90th=[58459], 99.95th=[58983], 00:25:15.386 | 99.99th=[58983] 00:25:15.386 bw ( KiB/s): min=21504, max=26112, per=34.21%, avg=23769.60, stdev=983.71, samples=20 00:25:15.386 iops : min= 168, max= 204, avg=185.70, stdev= 7.69, samples=20 00:25:15.386 lat (msec) : 10=0.11%, 20=99.41%, 50=0.11%, 100=0.38% 00:25:15.386 cpu : usr=94.98%, sys=4.61%, ctx=21, majf=0, minf=185 00:25:15.386 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:15.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.386 issued rwts: total=1859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:15.386 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:15.386 filename0: (groupid=0, jobs=1): err= 0: pid=4099994: Wed May 15 01:02:00 2024 00:25:15.386 read: IOPS=178, BW=22.3MiB/s (23.4MB/s)(224MiB/10047msec) 00:25:15.386 slat (usec): min=6, max=155, avg=19.37, stdev= 6.38 00:25:15.386 clat (msec): min=9, max=101, avg=16.78, stdev= 4.47 00:25:15.386 lat (msec): min=9, max=101, avg=16.80, stdev= 4.47 00:25:15.386 clat percentiles (msec): 00:25:15.386 | 1.00th=[ 11], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 16], 00:25:15.386 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 17], 60.00th=[ 17], 00:25:15.386 | 70.00th=[ 18], 80.00th=[ 18], 90.00th=[ 19], 95.00th=[ 19], 00:25:15.386 | 99.00th=[ 21], 99.50th=[ 58], 99.90th=[ 100], 99.95th=[ 102], 00:25:15.386 | 99.99th=[ 102] 00:25:15.386 bw ( KiB/s): min=18176, max=24832, per=32.96%, avg=22899.20, stdev=1427.71, samples=20 00:25:15.386 iops : min= 142, max= 194, avg=178.90, stdev=11.15, samples=20 00:25:15.386 lat (msec) : 10=0.61%, 20=97.93%, 50=0.84%, 100=0.56%, 250=0.06% 00:25:15.386 cpu : usr=93.69%, sys=5.24%, ctx=48, majf=0, minf=171 00:25:15.386 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:15.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.386 issued rwts: total=1791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:15.386 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:15.386 filename0: (groupid=0, jobs=1): err= 0: pid=4099995: Wed May 15 01:02:00 2024 00:25:15.386 read: IOPS=179, BW=22.4MiB/s (23.5MB/s)(226MiB/10048msec) 00:25:15.386 slat (nsec): min=4871, max=75479, avg=19088.12, stdev=5156.87 00:25:15.386 clat (usec): min=9794, max=61076, avg=16659.74, stdev=5153.38 00:25:15.386 lat (usec): min=9817, max=61091, avg=16678.83, stdev=5153.28 00:25:15.386 clat percentiles (usec): 00:25:15.386 | 1.00th=[10945], 5.00th=[13566], 
10.00th=[14484], 20.00th=[15139], 00:25:15.386 | 30.00th=[15533], 40.00th=[15926], 50.00th=[16188], 60.00th=[16450], 00:25:15.386 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17957], 95.00th=[18482], 00:25:15.386 | 99.00th=[57410], 99.50th=[57934], 99.90th=[60556], 99.95th=[61080], 00:25:15.386 | 99.99th=[61080] 00:25:15.386 bw ( KiB/s): min=19200, max=25088, per=33.17%, avg=23047.10, stdev=1386.24, samples=20 00:25:15.386 iops : min= 150, max= 196, avg=180.00, stdev=10.80, samples=20 00:25:15.386 lat (msec) : 10=0.17%, 20=98.17%, 50=0.28%, 100=1.39% 00:25:15.386 cpu : usr=94.81%, sys=4.69%, ctx=31, majf=0, minf=105 00:25:15.386 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:15.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.387 issued rwts: total=1804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:15.387 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:15.387 00:25:15.387 Run status group 0 (all jobs): 00:25:15.387 READ: bw=67.8MiB/s (71.1MB/s), 22.3MiB/s-23.1MiB/s (23.4MB/s-24.3MB/s), io=682MiB (715MB), run=10045-10048msec 00:25:15.387 01:02:00 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:25:15.387 01:02:00 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:25:15.387 01:02:00 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:25:15.387 01:02:00 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:15.387 01:02:00 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:25:15.387 01:02:00 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:15.387 01:02:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.387 01:02:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:15.387 01:02:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.387 01:02:00 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:15.387 01:02:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.387 01:02:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:15.387 01:02:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.387 00:25:15.387 real 0m11.083s 00:25:15.387 user 0m29.366s 00:25:15.387 sys 0m1.712s 00:25:15.387 01:02:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:15.387 01:02:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:15.387 ************************************ 00:25:15.387 END TEST fio_dif_digest 00:25:15.387 ************************************ 00:25:15.387 01:02:00 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:25:15.387 01:02:00 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:25:15.387 01:02:00 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:15.387 01:02:00 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:25:15.387 01:02:00 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:15.387 01:02:00 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:25:15.387 01:02:00 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:15.387 01:02:00 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:15.387 rmmod nvme_tcp 00:25:15.387 rmmod nvme_fabrics 00:25:15.387 rmmod nvme_keyring 
00:25:15.387 01:02:00 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:15.387 01:02:00 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:25:15.387 01:02:00 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:25:15.387 01:02:00 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 4095297 ']' 00:25:15.387 01:02:00 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 4095297 00:25:15.387 01:02:00 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 4095297 ']' 00:25:15.387 01:02:00 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 4095297 00:25:15.387 01:02:00 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:25:15.387 01:02:00 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:15.387 01:02:00 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4095297 00:25:15.387 01:02:00 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:15.387 01:02:00 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:15.387 01:02:00 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4095297' 00:25:15.387 killing process with pid 4095297 00:25:15.387 01:02:00 nvmf_dif -- common/autotest_common.sh@965 -- # kill 4095297 00:25:15.387 [2024-05-15 01:02:00.860446] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:15.387 01:02:00 nvmf_dif -- common/autotest_common.sh@970 -- # wait 4095297 00:25:15.387 01:02:01 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:25:15.387 01:02:01 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:15.387 Waiting for block devices as requested 00:25:15.387 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:25:15.387 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:25:15.387 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:25:15.387 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:25:15.387 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:25:15.387 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:25:15.387 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:25:15.387 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:25:15.644 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:25:15.644 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:25:15.644 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:25:15.644 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:25:15.945 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:25:15.945 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:25:15.945 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:25:15.945 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:25:16.204 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:25:16.204 01:02:03 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:16.204 01:02:03 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:16.204 01:02:03 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:16.204 01:02:03 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:16.204 01:02:03 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.204 01:02:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:16.204 01:02:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.163 01:02:05 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:18.163 00:25:18.163 real 1m5.090s 
00:25:18.163 user 6m29.813s 00:25:18.163 sys 0m16.079s 00:25:18.163 01:02:05 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:18.163 01:02:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:18.163 ************************************ 00:25:18.163 END TEST nvmf_dif 00:25:18.163 ************************************ 00:25:18.163 01:02:05 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:18.163 01:02:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:18.163 01:02:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:18.163 01:02:05 -- common/autotest_common.sh@10 -- # set +x 00:25:18.163 ************************************ 00:25:18.163 START TEST nvmf_abort_qd_sizes 00:25:18.163 ************************************ 00:25:18.163 01:02:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:18.422 * Looking for test storage... 00:25:18.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:18.422 01:02:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:18.422 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:25:18.422 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.422 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.422 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.422 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.423 01:02:05 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:25:18.423 01:02:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:19.799 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:25:19.800 Found 0000:08:00.0 (0x8086 - 0x159b) 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:25:19.800 Found 0000:08:00.1 (0x8086 - 0x159b) 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:25:19.800 Found net devices under 0000:08:00.0: cvl_0_0 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:25:19.800 Found net devices under 0000:08:00.1: cvl_0_1 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
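Both NIC ports have now been discovered (cvl_0_0 and cvl_0_1), and nvmf_tcp_init next splits them across network namespaces: the target port moves into cvl_0_0_ns_spdk on 10.0.0.2 while the initiator stays in the default namespace on 10.0.0.1. Stripped of xtrace noise, the sequence traced below comes down to the following (run as root, interface and namespace names as logged):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                              # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator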
00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.800 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:20.058 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:20.058 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.058 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.058 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.058 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.058 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:20.058 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.058 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.058 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.058 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:20.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:25:20.058 00:25:20.058 --- 10.0.0.2 ping statistics --- 00:25:20.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.058 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:25:20.058 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:20.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:25:20.058 00:25:20.058 --- 10.0.0.1 ping statistics --- 00:25:20.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.058 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:25:20.058 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.058 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:25:20.058 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:20.058 01:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:20.991 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:25:20.991 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:25:20.991 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:25:20.991 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:25:20.991 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:25:20.991 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:25:20.991 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:25:20.991 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:25:20.991 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:25:20.991 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:25:20.991 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:25:20.991 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:25:20.991 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:25:20.991 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:25:20.991 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:25:20.991 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:25:21.938 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=4103715 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 4103715 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 4103715 ']' 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:21.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:21.938 01:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:22.195 [2024-05-15 01:02:09.032980] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:25:22.195 [2024-05-15 01:02:09.033076] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.195 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.195 [2024-05-15 01:02:09.098290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:22.195 [2024-05-15 01:02:09.216840] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.195 [2024-05-15 01:02:09.216901] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.195 [2024-05-15 01:02:09.216918] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.195 [2024-05-15 01:02:09.216941] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.195 [2024-05-15 01:02:09.216954] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:22.195 [2024-05-15 01:02:09.217015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.195 [2024-05-15 01:02:09.217105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.195 [2024-05-15 01:02:09.217185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:22.195 [2024-05-15 01:02:09.217189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.452 01:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:22.452 01:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:25:22.452 01:02:09 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:22.452 01:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:22.452 01:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:22.452 01:02:09 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.452 01:02:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:25:22.452 01:02:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:25:22.452 01:02:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:25:22.452 01:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:25:22.453 01:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:25:22.453 01:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:84:00.0 ]] 00:25:22.453 01:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:25:22.453 01:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:25:22.453 01:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:84:00.0 ]] 00:25:22.453 01:02:09 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:25:22.453 01:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:25:22.453 01:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:25:22.453 01:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:25:22.453 01:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:84:00.0 00:25:22.453 01:02:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:25:22.453 01:02:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:84:00.0 00:25:22.453 01:02:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:25:22.453 01:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:22.453 01:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:22.453 01:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:22.453 ************************************ 00:25:22.453 START TEST spdk_target_abort 00:25:22.453 ************************************ 00:25:22.453 01:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:25:22.453 01:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:25:22.453 01:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:84:00.0 -b spdk_target 00:25:22.453 01:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.453 01:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:25.734 spdk_targetn1 00:25:25.734 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.734 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:25.734 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.734 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:25.734 [2024-05-15 01:02:12.222951] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.734 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.734 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:25:25.734 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.734 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:25.734 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.734 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:25:25.734 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.734 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:25.734 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.734 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:25:25.734 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.734 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:25.735 [2024-05-15 01:02:12.254961] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:25.735 [2024-05-15 01:02:12.255277] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:25.735 01:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:25.735 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.015 Initializing NVMe Controllers 00:25:29.015 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:25:29.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:29.015 Initialization complete. Launching workers. 00:25:29.015 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10619, failed: 0 00:25:29.015 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1208, failed to submit 9411 00:25:29.015 success 836, unsuccess 372, failed 0 00:25:29.015 01:02:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:29.015 01:02:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:29.015 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.296 [2024-05-15 01:02:18.738986] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646b40 is same with the state(5) to be set 00:25:32.296 [2024-05-15 01:02:18.739049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646b40 is same with the state(5) to be set 00:25:32.296 [2024-05-15 01:02:18.739065] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646b40 is same with the state(5) to be set 00:25:32.296 Initializing NVMe Controllers 00:25:32.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:25:32.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:32.296 Initialization complete. Launching workers. 00:25:32.296 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8699, failed: 0 00:25:32.296 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1227, failed to submit 7472 00:25:32.296 success 336, unsuccess 891, failed 0 00:25:32.296 01:02:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:32.296 01:02:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:32.297 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.578 Initializing NVMe Controllers 00:25:35.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:25:35.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:35.578 Initialization complete. Launching workers. 
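The spdk_target_abort bring-up above condenses to a handful of RPCs plus the abort example binary. A minimal sketch, assuming rpc_cmd in the harness is the usual wrapper around scripts/rpc.py talking to the default RPC socket:

  # attach the physical NVMe device found by the PCI scan as bdev "spdk_target"
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:84:00.0 -b spdk_target
  # stand up the NVMe/TCP target: transport, subsystem, namespace, listener
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  # sweep the abort queue depths; -w rw -M 50 is a 50/50 read/write mix at 4 KiB
  for qd in 4 24 64; do
    build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done

Each run reports I/Os completed versus aborts submitted; "success" counts aborts that cancelled their target command, while "unsuccess" counts aborts the controller could not honor, typically because the I/O had already completed, so both outcomes are expected in this race.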
00:25:35.578 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29591, failed: 0 00:25:35.578 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2682, failed to submit 26909 00:25:35.578 success 447, unsuccess 2235, failed 0 00:25:35.578 01:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:25:35.578 01:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.578 01:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:35.578 01:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.578 01:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:25:35.578 01:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.578 01:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:36.513 01:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.513 01:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 4103715 00:25:36.513 01:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 4103715 ']' 00:25:36.513 01:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 4103715 00:25:36.513 01:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:25:36.513 01:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:36.513 01:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4103715 00:25:36.513 01:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:36.513 01:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:36.513 01:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4103715' 00:25:36.513 killing process with pid 4103715 00:25:36.513 01:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 4103715 00:25:36.513 [2024-05-15 01:02:23.516180] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:36.513 01:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 4103715 00:25:36.771 00:25:36.771 real 0m14.332s 00:25:36.771 user 0m53.387s 00:25:36.771 sys 0m2.805s 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:36.771 ************************************ 00:25:36.771 END TEST spdk_target_abort 00:25:36.771 ************************************ 00:25:36.771 01:02:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:25:36.771 01:02:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:36.771 01:02:23 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:25:36.771 01:02:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:36.771 ************************************ 00:25:36.771 START TEST kernel_target_abort 00:25:36.771 ************************************ 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@728 -- # local ip 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:36.771 01:02:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:37.709 Waiting for block devices as requested 00:25:37.709 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:25:37.969 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:25:37.969 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:25:37.969 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:25:37.969 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:25:38.230 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:25:38.230 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:25:38.230 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:25:38.230 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:25:38.489 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:25:38.489 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:25:38.489 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:25:38.489 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:25:38.748 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:25:38.748 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:25:38.748 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:25:38.748 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:25:38.748 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:38.748 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:38.748 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:38.748 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:25:38.748 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:38.748 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:38.748 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:38.748 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:38.748 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:39.007 No valid GPT data, bailing 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:39.007 01:02:25 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:25:39.007 00:25:39.007 Discovery Log Number of Records 2, Generation counter 2 00:25:39.007 =====Discovery Log Entry 0====== 00:25:39.007 trtype: tcp 00:25:39.007 adrfam: ipv4 00:25:39.007 subtype: current discovery subsystem 00:25:39.007 treq: not specified, sq flow control disable supported 00:25:39.007 portid: 1 00:25:39.007 trsvcid: 4420 00:25:39.007 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:39.007 traddr: 10.0.0.1 00:25:39.007 eflags: none 00:25:39.007 sectype: none 00:25:39.007 =====Discovery Log Entry 1====== 00:25:39.007 trtype: tcp 00:25:39.007 adrfam: ipv4 00:25:39.007 subtype: nvme subsystem 00:25:39.007 treq: not specified, sq flow control disable supported 00:25:39.007 portid: 1 00:25:39.007 trsvcid: 4420 00:25:39.007 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:39.007 traddr: 10.0.0.1 00:25:39.007 eflags: none 00:25:39.007 sectype: none 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:39.007 01:02:25 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:39.007 01:02:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:39.007 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.288 Initializing NVMe Controllers 00:25:42.288 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:42.288 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:42.288 Initialization complete. Launching workers. 00:25:42.288 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31980, failed: 0 00:25:42.289 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31980, failed to submit 0 00:25:42.289 success 0, unsuccess 31980, failed 0 00:25:42.289 01:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:42.289 01:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:42.289 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.570 Initializing NVMe Controllers 00:25:45.570 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:45.570 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:45.570 Initialization complete. Launching workers. 
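kernel_target_abort repeats the same queue-depth sweep against a Linux kernel nvmet target on 10.0.0.1. The configure_kernel_target mkdir/echo/ln -s sequence traced above is the standard configfs recipe; the attribute file names below are inferred from the echoed values, since the redirection targets are not visible in the xtrace output:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"   # inferred target file
  echo 1 > "$subsys/attr_allow_any_host"                          # inferred target file
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp > "$nvmet/ports/1/addr_trtype"
  echo 4420 > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4 > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"
  # the discovery query above ("Discovery Log Number of Records 2") confirms the port is live
  nvme discover -t tcp -a 10.0.0.1 -s 4420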
00:25:45.570 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62620, failed: 0 00:25:45.570 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15790, failed to submit 46830 00:25:45.570 success 0, unsuccess 15790, failed 0 00:25:45.570 01:02:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:45.570 01:02:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:45.570 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.098 Initializing NVMe Controllers 00:25:48.098 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:48.098 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:48.098 Initialization complete. Launching workers. 00:25:48.098 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 61521, failed: 0 00:25:48.098 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15346, failed to submit 46175 00:25:48.098 success 0, unsuccess 15346, failed 0 00:25:48.098 01:02:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:25:48.098 01:02:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:48.098 01:02:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:25:48.098 01:02:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:48.098 01:02:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:48.098 01:02:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:48.098 01:02:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:48.098 01:02:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:48.098 01:02:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:48.355 01:02:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:49.291 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:25:49.291 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:25:49.291 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:25:49.291 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:25:49.291 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:25:49.291 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:25:49.291 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:25:49.291 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:25:49.291 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:25:49.291 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:25:49.291 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:25:49.291 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:25:49.291 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:25:49.291 0000:80:04.2 (8086 3c22): ioatdma -> 
vfio-pci 00:25:49.291 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:25:49.291 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:25:50.229 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:25:50.229 00:25:50.229 real 0m13.415s 00:25:50.229 user 0m5.146s 00:25:50.229 sys 0m2.996s 00:25:50.229 01:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:50.229 01:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:50.229 ************************************ 00:25:50.229 END TEST kernel_target_abort 00:25:50.229 ************************************ 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:50.229 rmmod nvme_tcp 00:25:50.229 rmmod nvme_fabrics 00:25:50.229 rmmod nvme_keyring 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 4103715 ']' 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 4103715 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 4103715 ']' 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 4103715 00:25:50.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (4103715) - No such process 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 4103715 is not found' 00:25:50.229 Process with pid 4103715 is not found 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:25:50.229 01:02:37 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:51.167 Waiting for block devices as requested 00:25:51.167 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:25:51.167 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:25:51.427 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:25:51.427 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:25:51.427 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:25:51.427 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:25:51.685 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:25:51.685 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:25:51.685 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:25:51.685 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:25:51.943 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:25:51.943 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:25:51.943 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:25:52.200 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:25:52.200 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:25:52.200 0000:80:04.1 
(8086 3c21): vfio-pci -> ioatdma 00:25:52.201 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:25:52.460 01:02:39 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:52.460 01:02:39 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:52.460 01:02:39 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:52.460 01:02:39 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:52.460 01:02:39 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.460 01:02:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:52.460 01:02:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.420 01:02:41 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:54.420 00:25:54.420 real 0m36.146s 00:25:54.420 user 1m0.259s 00:25:54.420 sys 0m8.601s 00:25:54.420 01:02:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:54.420 01:02:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:54.420 ************************************ 00:25:54.420 END TEST nvmf_abort_qd_sizes 00:25:54.420 ************************************ 00:25:54.420 01:02:41 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:25:54.420 01:02:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:54.420 01:02:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:54.420 01:02:41 -- common/autotest_common.sh@10 -- # set +x 00:25:54.420 ************************************ 00:25:54.420 START TEST keyring_file 00:25:54.420 ************************************ 00:25:54.420 01:02:41 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:25:54.420 * Looking for test storage... 
00:25:54.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:25:54.420 01:02:41 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:25:54.420 01:02:41 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:54.420 01:02:41 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.420 01:02:41 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.420 01:02:41 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.420 01:02:41 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.420 01:02:41 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.420 01:02:41 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.420 01:02:41 keyring_file -- paths/export.sh@5 -- # export PATH 00:25:54.420 01:02:41 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@47 -- # : 0 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:54.420 01:02:41 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:54.420 01:02:41 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:54.420 01:02:41 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:54.420 01:02:41 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:25:54.420 01:02:41 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:25:54.420 01:02:41 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:25:54.420 01:02:41 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:54.420 01:02:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:54.420 01:02:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:54.420 01:02:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:54.420 01:02:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:54.420 01:02:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:54.420 01:02:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IWOpr6u02E 00:25:54.420 01:02:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:25:54.420 01:02:41 keyring_file -- nvmf/common.sh@705 -- # python - 00:25:54.679 01:02:41 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IWOpr6u02E 00:25:54.679 01:02:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IWOpr6u02E 00:25:54.679 01:02:41 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.IWOpr6u02E 00:25:54.679 01:02:41 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:25:54.679 01:02:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:54.679 01:02:41 keyring_file -- keyring/common.sh@17 -- # name=key1 00:25:54.679 01:02:41 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:54.679 01:02:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:54.679 01:02:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:54.679 01:02:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gUXHJ6AVmA 00:25:54.679 01:02:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:25:54.679 01:02:41 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:54.679 01:02:41 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:25:54.679 01:02:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:54.679 01:02:41 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:25:54.679 01:02:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:25:54.679 01:02:41 keyring_file -- nvmf/common.sh@705 -- # python - 00:25:54.679 01:02:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gUXHJ6AVmA 00:25:54.679 01:02:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gUXHJ6AVmA 00:25:54.679 01:02:41 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.gUXHJ6AVmA 00:25:54.679 01:02:41 keyring_file -- keyring/file.sh@30 -- # tgtpid=4108145 00:25:54.679 01:02:41 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:25:54.679 01:02:41 keyring_file -- keyring/file.sh@32 -- # waitforlisten 4108145 00:25:54.679 01:02:41 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 4108145 ']' 00:25:54.679 01:02:41 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.679 01:02:41 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:54.679 01:02:41 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.679 01:02:41 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:54.679 01:02:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:54.679 [2024-05-15 01:02:41.622435] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
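Before the target comes up, prep_key (keyring/common.sh) has already written both PSKs to throwaway files, as traced above. A minimal sketch of that step, using the helper names from the trace; the comment on the key layout is an interpretation of the NVMe TLS interchange format rather than something shown verbatim here:

  key0path=$(mktemp)   # /tmp/tmp.IWOpr6u02E in this run
  # format_interchange_psk (nvmf/common.sh) wraps the raw hex key for
  # interchange: the NVMeTLSkey-1 prefix, a hash indicator (0 here, meaning
  # no retained-PSK digest), then a base64 blob derived from the key bytes
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
  chmod 0600 "$key0path"   # the keyring refuses key files readable by group/other

Later in the test the 0600 requirement is exercised directly: chmod 0660 on the same file makes keyring_file_add_key fail with "Invalid permissions", and restoring 0600 makes it succeed.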
00:25:54.679 [2024-05-15 01:02:41.622518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108145 ] 00:25:54.679 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.679 [2024-05-15 01:02:41.681871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.938 [2024-05-15 01:02:41.798244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:25:55.197 01:02:42 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:55.197 [2024-05-15 01:02:42.032201] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.197 null0 00:25:55.197 [2024-05-15 01:02:42.064213] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:55.197 [2024-05-15 01:02:42.064289] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:55.197 [2024-05-15 01:02:42.064607] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:55.197 [2024-05-15 01:02:42.072256] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.197 01:02:42 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:55.197 [2024-05-15 01:02:42.084288] nvmf_rpc.c: 768:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:25:55.197 request: 00:25:55.197 { 00:25:55.197 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:25:55.197 "secure_channel": false, 00:25:55.197 "listen_address": { 00:25:55.197 "trtype": "tcp", 00:25:55.197 "traddr": "127.0.0.1", 00:25:55.197 "trsvcid": "4420" 00:25:55.197 }, 00:25:55.197 "method": "nvmf_subsystem_add_listener", 00:25:55.197 "req_id": 1 00:25:55.197 } 00:25:55.197 Got JSON-RPC error response 00:25:55.197 response: 00:25:55.197 { 00:25:55.197 "code": -32602, 00:25:55.197 
"message": "Invalid parameters" 00:25:55.197 } 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:55.197 01:02:42 keyring_file -- keyring/file.sh@46 -- # bperfpid=4108239 00:25:55.197 01:02:42 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:25:55.197 01:02:42 keyring_file -- keyring/file.sh@48 -- # waitforlisten 4108239 /var/tmp/bperf.sock 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 4108239 ']' 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:55.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:55.197 01:02:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:55.197 [2024-05-15 01:02:42.134383] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 00:25:55.197 [2024-05-15 01:02:42.134477] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108239 ] 00:25:55.197 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.197 [2024-05-15 01:02:42.193369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.455 [2024-05-15 01:02:42.310757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.455 01:02:42 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:55.455 01:02:42 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:25:55.455 01:02:42 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IWOpr6u02E 00:25:55.455 01:02:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IWOpr6u02E 00:25:55.713 01:02:42 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.gUXHJ6AVmA 00:25:55.713 01:02:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.gUXHJ6AVmA 00:25:55.971 01:02:42 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:25:55.971 01:02:42 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:25:55.971 01:02:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:55.971 01:02:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:55.971 01:02:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:25:56.229 01:02:43 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.IWOpr6u02E == \/\t\m\p\/\t\m\p\.\I\W\O\p\r\6\u\0\2\E ]] 00:25:56.229 01:02:43 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:25:56.229 01:02:43 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:25:56.229 01:02:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:56.229 01:02:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:56.229 01:02:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:56.487 01:02:43 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.gUXHJ6AVmA == \/\t\m\p\/\t\m\p\.\g\U\X\H\J\6\A\V\m\A ]] 00:25:56.487 01:02:43 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:25:56.487 01:02:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:56.487 01:02:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:56.487 01:02:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:56.487 01:02:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:56.487 01:02:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:56.745 01:02:43 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:25:56.745 01:02:43 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:25:56.745 01:02:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:56.745 01:02:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:56.745 01:02:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:56.745 01:02:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:56.745 01:02:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:57.003 01:02:43 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:25:57.003 01:02:43 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:57.003 01:02:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:57.261 [2024-05-15 01:02:44.133887] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:57.261 nvme0n1 00:25:57.261 01:02:44 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:25:57.261 01:02:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:57.261 01:02:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:57.261 01:02:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:57.261 01:02:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:57.261 01:02:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:57.519 01:02:44 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:25:57.519 01:02:44 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:25:57.519 01:02:44 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:57.519 01:02:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:57.519 01:02:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:57.519 01:02:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:57.519 01:02:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:57.777 01:02:44 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:25:57.777 01:02:44 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:57.777 Running I/O for 1 seconds... 00:25:59.152 00:25:59.152 Latency(us) 00:25:59.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.152 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:25:59.152 nvme0n1 : 1.03 4443.87 17.36 0.00 0.00 28444.87 6310.87 30874.74 00:25:59.152 =================================================================================================================== 00:25:59.152 Total : 4443.87 17.36 0.00 0.00 28444.87 6310.87 30874.74 00:25:59.152 0 00:25:59.152 01:02:45 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:59.152 01:02:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:59.152 01:02:46 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:25:59.152 01:02:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:59.152 01:02:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:59.152 01:02:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:59.152 01:02:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:59.152 01:02:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:59.410 01:02:46 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:25:59.410 01:02:46 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:25:59.410 01:02:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:59.410 01:02:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:59.410 01:02:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:59.410 01:02:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:59.410 01:02:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:59.976 01:02:46 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:25:59.976 01:02:46 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:59.976 01:02:46 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:59.976 01:02:46 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:59.976 01:02:46 keyring_file -- common/autotest_common.sh@636 -- # 
local arg=bperf_cmd 00:25:59.976 01:02:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:59.976 01:02:46 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:59.976 01:02:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:59.976 01:02:46 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:59.976 01:02:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:59.976 [2024-05-15 01:02:47.035487] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:59.976 [2024-05-15 01:02:47.036016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe7b4f0 (107): Transport endpoint is not connected 00:26:00.234 [2024-05-15 01:02:47.037006] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe7b4f0 (9): Bad file descriptor 00:26:00.234 [2024-05-15 01:02:47.038005] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:00.234 [2024-05-15 01:02:47.038027] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:00.234 [2024-05-15 01:02:47.038042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
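This failure is the intended negative case: the bdevperf controller was attached with key0 (the PSK the listener expects), ran the one-second random read/write job above, detached, and is now re-attached with key1. In plain rpc.py terms the two attach calls look roughly like this, with /var/tmp/bperf.sock being bdevperf's RPC socket:

  rpc="scripts/rpc.py -s /var/tmp/bperf.sock"
  # key0 matches the target PSK, so the TLS handshake and attach succeed
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  # key1 is well formed but wrong, so the handshake fails, the socket drops
  # ("Transport endpoint is not connected") and the RPC error below is returned
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1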
00:26:00.234 request: 00:26:00.234 { 00:26:00.234 "name": "nvme0", 00:26:00.234 "trtype": "tcp", 00:26:00.234 "traddr": "127.0.0.1", 00:26:00.234 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:00.234 "adrfam": "ipv4", 00:26:00.234 "trsvcid": "4420", 00:26:00.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:00.234 "psk": "key1", 00:26:00.234 "method": "bdev_nvme_attach_controller", 00:26:00.234 "req_id": 1 00:26:00.234 } 00:26:00.234 Got JSON-RPC error response 00:26:00.234 response: 00:26:00.234 { 00:26:00.234 "code": -32602, 00:26:00.234 "message": "Invalid parameters" 00:26:00.234 } 00:26:00.234 01:02:47 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:26:00.234 01:02:47 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:00.234 01:02:47 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:00.235 01:02:47 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:00.235 01:02:47 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:26:00.235 01:02:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:00.235 01:02:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:00.235 01:02:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:00.235 01:02:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:00.235 01:02:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:00.493 01:02:47 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:26:00.493 01:02:47 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:26:00.493 01:02:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:00.493 01:02:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:00.493 01:02:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:00.493 01:02:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:00.493 01:02:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:00.752 01:02:47 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:26:00.752 01:02:47 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:26:00.752 01:02:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:01.010 01:02:47 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:26:01.010 01:02:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:26:01.269 01:02:48 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:26:01.269 01:02:48 keyring_file -- keyring/file.sh@77 -- # jq length 00:26:01.269 01:02:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:01.529 01:02:48 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:26:01.529 01:02:48 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.IWOpr6u02E 00:26:01.529 01:02:48 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.IWOpr6u02E 00:26:01.529 01:02:48 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:26:01.529 01:02:48 
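The rejected attach above is the point of the exercise: key1 is not the PSK the target expects, so the RPC must fail, and the suite wraps the call in its NOT helper to turn the expected failure into a pass. A rough sketch of that inversion idiom (the real helper in common/autotest_common.sh, partially visible in the trace, additionally validates the wrapped command and classifies exit codes above 128):

# Succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # non-zero exit from the command means the test passes
}

# Attaching with the wrong PSK must be rejected by the target.
NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1

The same wrapper guards the file-permission check that follows.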
00:26:01.529 01:02:48 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.IWOpr6u02E
00:26:01.529 01:02:48 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.IWOpr6u02E
00:26:01.529 01:02:48 keyring_file -- common/autotest_common.sh@648 -- # local es=0
00:26:01.529 01:02:48 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.IWOpr6u02E
00:26:01.529 01:02:48 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd
00:26:01.529 01:02:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:01.529 01:02:48 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd
00:26:01.529 01:02:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:01.529 01:02:48 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IWOpr6u02E
00:26:01.529 01:02:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IWOpr6u02E
00:26:01.787 [2024-05-15 01:02:48.812476] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.IWOpr6u02E': 0100660
[2024-05-15 01:02:48.812516] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
request:
{
  "name": "key0",
  "path": "/tmp/tmp.IWOpr6u02E",
  "method": "keyring_file_add_key",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -1,
  "message": "Operation not permitted"
}
00:26:01.787 01:02:48 keyring_file -- common/autotest_common.sh@651 -- # es=1
00:26:01.787 01:02:48 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:26:01.787 01:02:48 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:26:01.787 01:02:48 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:26:01.787 01:02:48 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.IWOpr6u02E
00:26:01.787 01:02:48 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IWOpr6u02E
00:26:01.787 01:02:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IWOpr6u02E
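This pair of chmod calls pins down the keyring_file permission rule: a key file carrying group or other bits (0660, reported as mode 0100660 in the error) is refused with 'Operation not permitted', and the identical file is accepted once tightened to owner-only 0600. Condensed, the expected behaviour looks like this, using the same temp key and socket as above:

KEY=/tmp/tmp.IWOpr6u02E
RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'

chmod 0660 "$KEY"
$RPC keyring_file_add_key key0 "$KEY" \
    && echo 'unexpected: group-accessible key file was accepted' \
    || echo 'rejected as expected (Operation not permitted)'

chmod 0600 "$KEY"              # owner read/write only
$RPC keyring_file_add_key key0 "$KEY" && echo 'key0 registered'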
00:26:02.354 01:02:49 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.IWOpr6u02E
00:26:02.354 01:02:49 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0
00:26:02.354 01:02:49 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:26:02.354 01:02:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:26:02.354 01:02:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:26:02.354 01:02:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:26:02.354 01:02:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:26:02.354 01:02:49 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 ))
00:26:02.354 01:02:49 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:26:02.354 01:02:49 keyring_file -- common/autotest_common.sh@648 -- # local es=0
00:26:02.354 01:02:49 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:26:02.354 01:02:49 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd
00:26:02.354 01:02:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:02.354 01:02:49 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd
00:26:02.354 01:02:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:02.354 01:02:49 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:26:02.354 01:02:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:26:02.613 [2024-05-15 01:02:49.602608] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.IWOpr6u02E': No such file or directory
00:26:02.613 [2024-05-15 01:02:49.602650] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory
00:26:02.613 [2024-05-15 01:02:49.602685] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1
00:26:02.613 [2024-05-15 01:02:49.602698] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:26:02.613 [2024-05-15 01:02:49.602711] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1)
00:26:02.613 request:
00:26:02.613 {
00:26:02.613   "name": "nvme0",
00:26:02.613   "trtype": "tcp",
00:26:02.613   "traddr": "127.0.0.1",
00:26:02.613   "hostnqn": "nqn.2016-06.io.spdk:host0",
00:26:02.613   "adrfam": "ipv4",
00:26:02.613   "trsvcid": "4420",
00:26:02.613   "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:26:02.613   "psk": "key0",
00:26:02.613   "method": "bdev_nvme_attach_controller",
00:26:02.613   "req_id": 1
00:26:02.613 }
00:26:02.613 Got JSON-RPC error response
00:26:02.613 response:
00:26:02.613 {
00:26:02.613   "code": -19,
00:26:02.613   "message": "No such device"
00:26:02.613 }
00:26:02.613 01:02:49 keyring_file -- common/autotest_common.sh@651 -- # es=1
00:26:02.613 01:02:49 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:26:02.613 01:02:49 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:26:02.613 01:02:49 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:26:02.613 01:02:49 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0
00:26:02.613 01:02:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
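The ordering here is the interesting part: key0 was registered while its file still existed, the file was then deleted, and only the later attach failed, with keyring_file unable to stat the path and the RPC surfacing -19 / 'No such device'. Registration evidently does not cache the secret; the backing file is re-read when the key is actually used. A sketch of that sequence under the same rpc.py/socket layout, with a hypothetical key path:

RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'

$RPC keyring_file_add_key key0 /tmp/psk.key    # succeeds while the file exists
rm -f /tmp/psk.key
# First real use of the key re-reads the path and now fails.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk key0 || echo 'attach failed: backing key file is gone'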
00:26:02.871 01:02:49 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:26:02.871 01:02:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:26:02.872 01:02:49 keyring_file -- keyring/common.sh@17 -- # name=key0
00:26:02.872 01:02:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:26:02.872 01:02:49 keyring_file -- keyring/common.sh@17 -- # digest=0
00:26:02.872 01:02:49 keyring_file -- keyring/common.sh@18 -- # mktemp
00:26:02.872 01:02:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9Kxi93QbS9
00:26:02.872 01:02:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:26:02.872 01:02:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:26:02.872 01:02:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest
00:26:02.872 01:02:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:26:02.872 01:02:49 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff
00:26:02.872 01:02:49 keyring_file -- nvmf/common.sh@704 -- # digest=0
00:26:02.872 01:02:49 keyring_file -- nvmf/common.sh@705 -- # python -
00:26:02.872 01:02:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9Kxi93QbS9
00:26:02.872 01:02:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9Kxi93QbS9
00:26:02.872 01:02:49 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.9Kxi93QbS9
00:26:02.872 01:02:49 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9Kxi93QbS9
00:26:02.872 01:02:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9Kxi93QbS9
00:26:03.135 01:02:50 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:26:03.135 01:02:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:26:03.395 nvme0n1
00:26:03.653 01:02:50 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0
00:26:03.653 01:02:50 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:26:03.653 01:02:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:26:03.653 01:02:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:26:03.653 01:02:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:26:03.653 01:02:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:26:03.911 01:02:50 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 ))
00:26:03.911 01:02:50 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0
00:26:03.911 01:02:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:26:04.170 01:02:51 keyring_file -- keyring/file.sh@101 -- # get_key key0
00:26:04.170 01:02:51 keyring_file -- keyring/file.sh@101 -- # jq -r .removed
00:26:04.170 01:02:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:26:04.170 01:02:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:26:04.170 01:02:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:26:04.428 01:02:51 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]]
00:26:04.428 01:02:51 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0
00:26:04.428 01:02:51 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:26:04.428 01:02:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:26:04.428 01:02:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:26:04.428 01:02:51 keyring_file --
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:04.428 01:02:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:04.686 01:02:51 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:26:04.686 01:02:51 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:04.686 01:02:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:04.944 01:02:51 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:26:04.944 01:02:51 keyring_file -- keyring/file.sh@104 -- # jq length 00:26:04.944 01:02:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:05.202 01:02:52 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:26:05.203 01:02:52 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9Kxi93QbS9 00:26:05.203 01:02:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9Kxi93QbS9 00:26:05.461 01:02:52 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.gUXHJ6AVmA 00:26:05.461 01:02:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.gUXHJ6AVmA 00:26:05.719 01:02:52 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:05.719 01:02:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:06.286 nvme0n1 00:26:06.286 01:02:53 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:26:06.286 01:02:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:26:06.546 01:02:53 keyring_file -- keyring/file.sh@112 -- # config='{ 00:26:06.546 "subsystems": [ 00:26:06.546 { 00:26:06.546 "subsystem": "keyring", 00:26:06.546 "config": [ 00:26:06.546 { 00:26:06.546 "method": "keyring_file_add_key", 00:26:06.546 "params": { 00:26:06.546 "name": "key0", 00:26:06.546 "path": "/tmp/tmp.9Kxi93QbS9" 00:26:06.546 } 00:26:06.546 }, 00:26:06.546 { 00:26:06.546 "method": "keyring_file_add_key", 00:26:06.546 "params": { 00:26:06.546 "name": "key1", 00:26:06.546 "path": "/tmp/tmp.gUXHJ6AVmA" 00:26:06.546 } 00:26:06.546 } 00:26:06.546 ] 00:26:06.546 }, 00:26:06.546 { 00:26:06.546 "subsystem": "iobuf", 00:26:06.546 "config": [ 00:26:06.546 { 00:26:06.546 "method": "iobuf_set_options", 00:26:06.547 "params": { 00:26:06.547 "small_pool_count": 8192, 00:26:06.547 "large_pool_count": 1024, 00:26:06.547 "small_bufsize": 8192, 00:26:06.547 "large_bufsize": 135168 00:26:06.547 } 00:26:06.547 } 00:26:06.547 ] 00:26:06.547 }, 00:26:06.547 { 00:26:06.547 "subsystem": "sock", 00:26:06.547 "config": [ 00:26:06.547 { 00:26:06.547 "method": "sock_impl_set_options", 00:26:06.547 "params": { 00:26:06.547 
"impl_name": "posix", 00:26:06.547 "recv_buf_size": 2097152, 00:26:06.547 "send_buf_size": 2097152, 00:26:06.547 "enable_recv_pipe": true, 00:26:06.547 "enable_quickack": false, 00:26:06.547 "enable_placement_id": 0, 00:26:06.547 "enable_zerocopy_send_server": true, 00:26:06.547 "enable_zerocopy_send_client": false, 00:26:06.547 "zerocopy_threshold": 0, 00:26:06.547 "tls_version": 0, 00:26:06.547 "enable_ktls": false 00:26:06.547 } 00:26:06.547 }, 00:26:06.547 { 00:26:06.547 "method": "sock_impl_set_options", 00:26:06.547 "params": { 00:26:06.547 "impl_name": "ssl", 00:26:06.547 "recv_buf_size": 4096, 00:26:06.547 "send_buf_size": 4096, 00:26:06.547 "enable_recv_pipe": true, 00:26:06.547 "enable_quickack": false, 00:26:06.547 "enable_placement_id": 0, 00:26:06.547 "enable_zerocopy_send_server": true, 00:26:06.547 "enable_zerocopy_send_client": false, 00:26:06.547 "zerocopy_threshold": 0, 00:26:06.547 "tls_version": 0, 00:26:06.547 "enable_ktls": false 00:26:06.547 } 00:26:06.547 } 00:26:06.547 ] 00:26:06.547 }, 00:26:06.547 { 00:26:06.547 "subsystem": "vmd", 00:26:06.547 "config": [] 00:26:06.547 }, 00:26:06.547 { 00:26:06.547 "subsystem": "accel", 00:26:06.547 "config": [ 00:26:06.547 { 00:26:06.547 "method": "accel_set_options", 00:26:06.547 "params": { 00:26:06.547 "small_cache_size": 128, 00:26:06.547 "large_cache_size": 16, 00:26:06.547 "task_count": 2048, 00:26:06.547 "sequence_count": 2048, 00:26:06.547 "buf_count": 2048 00:26:06.547 } 00:26:06.547 } 00:26:06.547 ] 00:26:06.547 }, 00:26:06.547 { 00:26:06.547 "subsystem": "bdev", 00:26:06.547 "config": [ 00:26:06.547 { 00:26:06.547 "method": "bdev_set_options", 00:26:06.547 "params": { 00:26:06.547 "bdev_io_pool_size": 65535, 00:26:06.547 "bdev_io_cache_size": 256, 00:26:06.547 "bdev_auto_examine": true, 00:26:06.547 "iobuf_small_cache_size": 128, 00:26:06.547 "iobuf_large_cache_size": 16 00:26:06.547 } 00:26:06.547 }, 00:26:06.547 { 00:26:06.547 "method": "bdev_raid_set_options", 00:26:06.547 "params": { 00:26:06.547 "process_window_size_kb": 1024 00:26:06.547 } 00:26:06.547 }, 00:26:06.547 { 00:26:06.547 "method": "bdev_iscsi_set_options", 00:26:06.547 "params": { 00:26:06.547 "timeout_sec": 30 00:26:06.547 } 00:26:06.547 }, 00:26:06.547 { 00:26:06.547 "method": "bdev_nvme_set_options", 00:26:06.547 "params": { 00:26:06.547 "action_on_timeout": "none", 00:26:06.547 "timeout_us": 0, 00:26:06.547 "timeout_admin_us": 0, 00:26:06.547 "keep_alive_timeout_ms": 10000, 00:26:06.547 "arbitration_burst": 0, 00:26:06.547 "low_priority_weight": 0, 00:26:06.547 "medium_priority_weight": 0, 00:26:06.547 "high_priority_weight": 0, 00:26:06.547 "nvme_adminq_poll_period_us": 10000, 00:26:06.547 "nvme_ioq_poll_period_us": 0, 00:26:06.547 "io_queue_requests": 512, 00:26:06.547 "delay_cmd_submit": true, 00:26:06.547 "transport_retry_count": 4, 00:26:06.547 "bdev_retry_count": 3, 00:26:06.547 "transport_ack_timeout": 0, 00:26:06.547 "ctrlr_loss_timeout_sec": 0, 00:26:06.547 "reconnect_delay_sec": 0, 00:26:06.547 "fast_io_fail_timeout_sec": 0, 00:26:06.547 "disable_auto_failback": false, 00:26:06.547 "generate_uuids": false, 00:26:06.547 "transport_tos": 0, 00:26:06.547 "nvme_error_stat": false, 00:26:06.547 "rdma_srq_size": 0, 00:26:06.547 "io_path_stat": false, 00:26:06.547 "allow_accel_sequence": false, 00:26:06.547 "rdma_max_cq_size": 0, 00:26:06.547 "rdma_cm_event_timeout_ms": 0, 00:26:06.547 "dhchap_digests": [ 00:26:06.547 "sha256", 00:26:06.547 "sha384", 00:26:06.547 "sha512" 00:26:06.547 ], 00:26:06.547 "dhchap_dhgroups": [ 00:26:06.547 "null", 
00:26:06.547 "ffdhe2048", 00:26:06.547 "ffdhe3072", 00:26:06.547 "ffdhe4096", 00:26:06.547 "ffdhe6144", 00:26:06.547 "ffdhe8192" 00:26:06.547 ] 00:26:06.547 } 00:26:06.547 }, 00:26:06.547 { 00:26:06.547 "method": "bdev_nvme_attach_controller", 00:26:06.547 "params": { 00:26:06.547 "name": "nvme0", 00:26:06.547 "trtype": "TCP", 00:26:06.547 "adrfam": "IPv4", 00:26:06.547 "traddr": "127.0.0.1", 00:26:06.547 "trsvcid": "4420", 00:26:06.547 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:06.547 "prchk_reftag": false, 00:26:06.547 "prchk_guard": false, 00:26:06.547 "ctrlr_loss_timeout_sec": 0, 00:26:06.547 "reconnect_delay_sec": 0, 00:26:06.547 "fast_io_fail_timeout_sec": 0, 00:26:06.547 "psk": "key0", 00:26:06.547 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:06.547 "hdgst": false, 00:26:06.547 "ddgst": false 00:26:06.548 } 00:26:06.548 }, 00:26:06.548 { 00:26:06.548 "method": "bdev_nvme_set_hotplug", 00:26:06.548 "params": { 00:26:06.548 "period_us": 100000, 00:26:06.548 "enable": false 00:26:06.548 } 00:26:06.548 }, 00:26:06.548 { 00:26:06.548 "method": "bdev_wait_for_examine" 00:26:06.548 } 00:26:06.548 ] 00:26:06.548 }, 00:26:06.548 { 00:26:06.548 "subsystem": "nbd", 00:26:06.548 "config": [] 00:26:06.548 } 00:26:06.548 ] 00:26:06.548 }' 00:26:06.548 01:02:53 keyring_file -- keyring/file.sh@114 -- # killprocess 4108239 00:26:06.548 01:02:53 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 4108239 ']' 00:26:06.548 01:02:53 keyring_file -- common/autotest_common.sh@950 -- # kill -0 4108239 00:26:06.548 01:02:53 keyring_file -- common/autotest_common.sh@951 -- # uname 00:26:06.548 01:02:53 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:06.548 01:02:53 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4108239 00:26:06.548 01:02:53 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:06.548 01:02:53 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:06.548 01:02:53 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4108239' 00:26:06.548 killing process with pid 4108239 00:26:06.548 01:02:53 keyring_file -- common/autotest_common.sh@965 -- # kill 4108239 00:26:06.548 Received shutdown signal, test time was about 1.000000 seconds 00:26:06.548 00:26:06.548 Latency(us) 00:26:06.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.548 =================================================================================================================== 00:26:06.548 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:06.548 01:02:53 keyring_file -- common/autotest_common.sh@970 -- # wait 4108239 00:26:06.548 01:02:53 keyring_file -- keyring/file.sh@117 -- # bperfpid=4109392 00:26:06.548 01:02:53 keyring_file -- keyring/file.sh@119 -- # waitforlisten 4109392 /var/tmp/bperf.sock 00:26:06.548 01:02:53 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 4109392 ']' 00:26:06.548 01:02:53 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:06.548 01:02:53 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:26:06.548 "subsystems": [ 00:26:06.548 { 00:26:06.548 "subsystem": "keyring", 00:26:06.548 "config": [ 00:26:06.548 { 00:26:06.548 "method": "keyring_file_add_key", 00:26:06.548 "params": { 00:26:06.548 "name": "key0", 00:26:06.548 "path": "/tmp/tmp.9Kxi93QbS9" 00:26:06.548 } 00:26:06.548 }, 00:26:06.548 { 00:26:06.548 "method": "keyring_file_add_key", 00:26:06.548 "params": { 
00:26:06.548 "name": "key1", 00:26:06.548 "path": "/tmp/tmp.gUXHJ6AVmA" 00:26:06.548 } 00:26:06.548 } 00:26:06.548 ] 00:26:06.548 }, 00:26:06.548 { 00:26:06.548 "subsystem": "iobuf", 00:26:06.548 "config": [ 00:26:06.548 { 00:26:06.548 "method": "iobuf_set_options", 00:26:06.548 "params": { 00:26:06.548 "small_pool_count": 8192, 00:26:06.548 "large_pool_count": 1024, 00:26:06.548 "small_bufsize": 8192, 00:26:06.548 "large_bufsize": 135168 00:26:06.548 } 00:26:06.548 } 00:26:06.548 ] 00:26:06.548 }, 00:26:06.548 { 00:26:06.548 "subsystem": "sock", 00:26:06.548 "config": [ 00:26:06.548 { 00:26:06.548 "method": "sock_impl_set_options", 00:26:06.548 "params": { 00:26:06.548 "impl_name": "posix", 00:26:06.548 "recv_buf_size": 2097152, 00:26:06.548 "send_buf_size": 2097152, 00:26:06.548 "enable_recv_pipe": true, 00:26:06.548 "enable_quickack": false, 00:26:06.548 "enable_placement_id": 0, 00:26:06.548 "enable_zerocopy_send_server": true, 00:26:06.548 "enable_zerocopy_send_client": false, 00:26:06.548 "zerocopy_threshold": 0, 00:26:06.548 "tls_version": 0, 00:26:06.548 "enable_ktls": false 00:26:06.548 } 00:26:06.548 }, 00:26:06.548 { 00:26:06.548 "method": "sock_impl_set_options", 00:26:06.548 "params": { 00:26:06.548 "impl_name": "ssl", 00:26:06.548 "recv_buf_size": 4096, 00:26:06.548 "send_buf_size": 4096, 00:26:06.548 "enable_recv_pipe": true, 00:26:06.548 "enable_quickack": false, 00:26:06.548 "enable_placement_id": 0, 00:26:06.548 "enable_zerocopy_send_server": true, 00:26:06.548 "enable_zerocopy_send_client": false, 00:26:06.548 "zerocopy_threshold": 0, 00:26:06.548 "tls_version": 0, 00:26:06.548 "enable_ktls": false 00:26:06.548 } 00:26:06.548 } 00:26:06.548 ] 00:26:06.548 }, 00:26:06.548 { 00:26:06.548 "subsystem": "vmd", 00:26:06.548 "config": [] 00:26:06.548 }, 00:26:06.548 { 00:26:06.548 "subsystem": "accel", 00:26:06.548 "config": [ 00:26:06.548 { 00:26:06.548 "method": "accel_set_options", 00:26:06.548 "params": { 00:26:06.548 "small_cache_size": 128, 00:26:06.548 "large_cache_size": 16, 00:26:06.548 "task_count": 2048, 00:26:06.548 "sequence_count": 2048, 00:26:06.548 "buf_count": 2048 00:26:06.548 } 00:26:06.548 } 00:26:06.548 ] 00:26:06.548 }, 00:26:06.548 { 00:26:06.548 "subsystem": "bdev", 00:26:06.548 "config": [ 00:26:06.548 { 00:26:06.548 "method": "bdev_set_options", 00:26:06.548 "params": { 00:26:06.548 "bdev_io_pool_size": 65535, 00:26:06.548 "bdev_io_cache_size": 256, 00:26:06.548 "bdev_auto_examine": true, 00:26:06.548 "iobuf_small_cache_size": 128, 00:26:06.548 "iobuf_large_cache_size": 16 00:26:06.548 } 00:26:06.548 }, 00:26:06.548 { 00:26:06.548 "method": "bdev_raid_set_options", 00:26:06.548 "params": { 00:26:06.548 "process_window_size_kb": 1024 00:26:06.548 } 00:26:06.548 }, 00:26:06.548 { 00:26:06.548 "method": "bdev_iscsi_set_options", 00:26:06.548 "params": { 00:26:06.548 "timeout_sec": 30 00:26:06.548 } 00:26:06.548 }, 00:26:06.548 { 00:26:06.548 "method": "bdev_nvme_set_options", 00:26:06.548 "params": { 00:26:06.548 "action_on_timeout": "none", 00:26:06.548 "timeout_us": 0, 00:26:06.548 "timeout_admin_us": 0, 00:26:06.548 "keep_alive_timeout_ms": 10000, 00:26:06.548 "arbitration_burst": 0, 00:26:06.549 "low_priority_weight": 0, 00:26:06.549 "medium_priority_weight": 0, 00:26:06.549 "high_priority_weight": 0, 00:26:06.549 "nvme_adminq_poll_period_us": 10000, 00:26:06.549 "nvme_ioq_poll_period_us": 0, 00:26:06.549 "io_queue_requests": 512, 00:26:06.549 "delay_cmd_submit": true, 00:26:06.549 "transport_retry_count": 4, 00:26:06.549 "bdev_retry_count": 3, 
00:26:06.549 "transport_ack_timeout": 0, 00:26:06.549 "ctrlr_loss_timeout_sec": 0, 00:26:06.549 "reconnect_delay_sec": 0, 00:26:06.549 "fast_io_fail_timeout_sec": 0, 00:26:06.549 "disable_auto_failback": false, 00:26:06.549 "generate_uuids": false, 00:26:06.549 "transport_tos": 0, 00:26:06.549 "nvme_error_stat": false, 00:26:06.549 "rdma_srq_size": 0, 00:26:06.549 "io_path_stat": false, 00:26:06.549 "allow_accel_sequence": false, 00:26:06.549 "rdma_max_cq_size": 0, 00:26:06.549 "rdma_cm_event_timeout_ms": 0, 00:26:06.549 "dhchap_digests": [ 00:26:06.549 "sha256", 00:26:06.549 "sha384", 00:26:06.549 "sha512" 00:26:06.549 ], 00:26:06.549 "dhchap_dhgroups": [ 00:26:06.549 "null", 00:26:06.549 "ffdhe2048", 00:26:06.549 "ffdhe3072", 00:26:06.549 "ffdhe4096", 00:26:06.549 "ffdhe6144", 00:26:06.549 "ffdhe8192" 00:26:06.549 ] 00:26:06.549 } 00:26:06.549 }, 00:26:06.549 { 00:26:06.549 "method": "bdev_nvme_attach_controller", 00:26:06.549 "params": { 00:26:06.549 "name": "nvme0", 00:26:06.549 "trtype": "TCP", 00:26:06.549 "adrfam": "IPv4", 00:26:06.549 "traddr": "127.0.0.1", 00:26:06.549 "trsvcid": "4420", 00:26:06.549 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:06.549 "prchk_reftag": false, 00:26:06.549 "prchk_guard": false, 00:26:06.549 "ctrlr_loss_timeout_sec": 0, 00:26:06.549 "reconnect_delay_sec": 0, 00:26:06.549 "fast_io_fail_timeout_sec": 0, 00:26:06.549 "psk": "key0", 00:26:06.549 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:06.549 "hdgst": false, 00:26:06.549 "ddgst": false 00:26:06.549 } 00:26:06.549 }, 00:26:06.549 { 00:26:06.549 "method": "bdev_nvme_set_hotplug", 00:26:06.549 "params": { 00:26:06.549 "period_us": 100000, 00:26:06.549 "enable": false 00:26:06.549 } 00:26:06.549 }, 00:26:06.549 { 00:26:06.549 "method": "bdev_wait_for_examine" 00:26:06.549 } 00:26:06.549 ] 00:26:06.549 }, 00:26:06.549 { 00:26:06.549 "subsystem": "nbd", 00:26:06.549 "config": [] 00:26:06.549 } 00:26:06.549 ] 00:26:06.549 }' 00:26:06.549 01:02:53 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:26:06.549 01:02:53 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:06.549 01:02:53 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:06.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:06.549 01:02:53 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:06.549 01:02:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:06.810 [2024-05-15 01:02:53.647966] Starting SPDK v24.05-pre git sha1 c06b0c79b / DPDK 23.11.0 initialization... 
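The configuration JSON appears twice above by design: the first copy is what save_config returned from the running app, and the second is the same document being echoed into a brand-new bdevperf instance, whose -c /dev/fd/63 argument is the tell-tale of bash process substitution. A rough sketch of that save-and-replay pattern, assuming the bdevperf binary location and flags from this run:

# Snapshot the live configuration, then boot a fresh instance from it.
CONFIG=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
             -s /var/tmp/bperf.sock save_config)

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$CONFIG") &
bperfpid=$!

Replaying the snapshot is what lets the keyring checks right after startup find both keys already registered and the controller reattached.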
00:26:06.810 [2024-05-15 01:02:53.648060] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4109392 ] 00:26:06.810 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.810 [2024-05-15 01:02:53.712276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.810 [2024-05-15 01:02:53.828877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.069 [2024-05-15 01:02:53.993674] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:07.634 01:02:54 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:07.634 01:02:54 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:26:07.634 01:02:54 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:26:07.634 01:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:07.634 01:02:54 keyring_file -- keyring/file.sh@120 -- # jq length 00:26:07.892 01:02:54 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:26:07.892 01:02:54 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:26:07.892 01:02:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:07.892 01:02:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:07.892 01:02:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:07.892 01:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:07.892 01:02:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:08.150 01:02:55 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:26:08.150 01:02:55 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:26:08.150 01:02:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:08.150 01:02:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:08.150 01:02:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:08.150 01:02:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:08.150 01:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:08.409 01:02:55 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:26:08.409 01:02:55 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:26:08.409 01:02:55 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:26:08.409 01:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:26:08.667 01:02:55 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:26:08.667 01:02:55 keyring_file -- keyring/file.sh@1 -- # cleanup 00:26:08.667 01:02:55 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.9Kxi93QbS9 /tmp/tmp.gUXHJ6AVmA 00:26:08.667 01:02:55 keyring_file -- keyring/file.sh@20 -- # killprocess 4109392 00:26:08.667 01:02:55 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 4109392 ']' 00:26:08.667 01:02:55 keyring_file -- common/autotest_common.sh@950 -- # kill -0 4109392 00:26:08.667 01:02:55 keyring_file -- common/autotest_common.sh@951 -- # 
uname
00:26:08.667 01:02:55 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:26:08.667 01:02:55 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4109392
00:26:08.667 01:02:55 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:26:08.667 01:02:55 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:26:08.667 01:02:55 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4109392'
00:26:08.667 killing process with pid 4109392
00:26:08.667 01:02:55 keyring_file -- common/autotest_common.sh@965 -- # kill 4109392
00:26:08.667 Received shutdown signal, test time was about 1.000000 seconds
00:26:08.667
00:26:08.667 Latency(us)
00:26:08.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:08.667 ===================================================================================================================
00:26:08.667 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:26:08.667 01:02:55 keyring_file -- common/autotest_common.sh@970 -- # wait 4109392
00:26:08.926 01:02:55 keyring_file -- keyring/file.sh@21 -- # killprocess 4108145
00:26:08.926 01:02:55 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 4108145 ']'
00:26:08.926 01:02:55 keyring_file -- common/autotest_common.sh@950 -- # kill -0 4108145
00:26:08.926 01:02:55 keyring_file -- common/autotest_common.sh@951 -- # uname
00:26:08.926 01:02:55 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:26:08.926 01:02:55 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4108145
00:26:08.926 01:02:55 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:26:08.926 01:02:55 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:26:08.926 01:02:55 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4108145'
00:26:08.926 killing process with pid 4108145
00:26:08.926 01:02:55 keyring_file -- common/autotest_common.sh@965 -- # kill 4108145
00:26:08.926 [2024-05-15 01:02:55.872955] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:26:08.926 [2024-05-15 01:02:55.873012] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:26:08.926 01:02:55 keyring_file -- common/autotest_common.sh@970 -- # wait 4108145
00:26:09.185
00:26:09.185 real 0m14.811s
00:26:09.185 user 0m37.247s
00:26:09.185 sys 0m3.177s
00:26:09.185 01:02:56 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable
00:26:09.185 01:02:56 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:26:09.185 ************************************
00:26:09.185 END TEST keyring_file
00:26:09.186 ************************************
00:26:09.186 01:02:56 -- spdk/autotest.sh@292 -- # [[ n == y ]]
00:26:09.186 01:02:56 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']'
00:26:09.186 01:02:56 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:26:09.186 01:02:56 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:26:09.186 01:02:56 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']'
00:26:09.186 01:02:56 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']'
00:26:09.186 01:02:56 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']'
00:26:09.186 01:02:56 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:26:09.186
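Teardown of both bperf processes runs through the killprocess helper whose expansion is traced above: verify the pid is still alive, refuse to kill anything running as sudo, signal it, then reap it. Condensed (the real helper in common/autotest_common.sh also resolves the process name used in the log message):

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                        # still running?
    [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # reap it (works because it is a child of this shell)
}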
01:02:56 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:26:09.186 01:02:56 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:26:09.186 01:02:56 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:26:09.186 01:02:56 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:26:09.186 01:02:56 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:26:09.186 01:02:56 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:26:09.186 01:02:56 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:26:09.186 01:02:56 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:26:09.186 01:02:56 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:26:09.186 01:02:56 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:26:09.186 01:02:56 -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:09.186 01:02:56 -- common/autotest_common.sh@10 -- # set +x 00:26:09.186 01:02:56 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:26:09.186 01:02:56 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:26:09.186 01:02:56 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:26:09.186 01:02:56 -- common/autotest_common.sh@10 -- # set +x 00:26:11.091 INFO: APP EXITING 00:26:11.091 INFO: killing all VMs 00:26:11.091 INFO: killing vhost app 00:26:11.091 WARN: no vhost pid file found 00:26:11.091 INFO: EXIT DONE 00:26:11.662 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:26:11.662 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:26:11.662 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:26:11.662 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:26:11.662 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 00:26:11.662 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:26:11.922 0000:00:04.2 (8086 3c22): Already using the ioatdma driver 00:26:11.922 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:26:11.922 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:26:11.922 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:26:11.922 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:26:11.922 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:26:11.922 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:26:11.922 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:26:11.922 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:26:11.922 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:26:11.922 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:26:12.862 Cleaning 00:26:12.862 Removing: /var/run/dpdk/spdk0/config 00:26:12.862 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:12.862 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:12.862 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:12.862 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:12.862 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:26:12.862 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:26:12.862 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:26:12.862 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:26:12.862 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:12.862 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:12.862 Removing: /var/run/dpdk/spdk1/config 00:26:12.862 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:12.862 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:12.862 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:26:12.862 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 
00:26:12.862 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:26:12.862 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:26:12.862 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:26:12.862 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:26:12.862 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:12.862 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:12.862 Removing: /var/run/dpdk/spdk1/mp_socket 00:26:12.862 Removing: /var/run/dpdk/spdk2/config 00:26:12.862 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:12.862 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:12.862 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:12.862 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:12.862 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:26:12.862 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:26:12.862 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:26:12.862 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:26:12.862 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:12.862 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:12.862 Removing: /var/run/dpdk/spdk3/config 00:26:12.862 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:12.862 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:12.862 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:13.122 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:13.122 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:26:13.122 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:26:13.122 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:26:13.122 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:26:13.122 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:13.122 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:13.122 Removing: /var/run/dpdk/spdk4/config 00:26:13.122 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:13.122 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:13.122 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:13.122 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:13.122 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:26:13.122 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:26:13.122 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:26:13.122 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:26:13.122 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:13.122 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:13.122 Removing: /dev/shm/bdev_svc_trace.1 00:26:13.122 Removing: /dev/shm/nvmf_trace.0 00:26:13.122 Removing: /dev/shm/spdk_tgt_trace.pid3927828 00:26:13.122 Removing: /var/run/dpdk/spdk0 00:26:13.122 Removing: /var/run/dpdk/spdk1 00:26:13.122 Removing: /var/run/dpdk/spdk2 00:26:13.122 Removing: /var/run/dpdk/spdk3 00:26:13.122 Removing: /var/run/dpdk/spdk4 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3926603 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3927173 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3927828 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3928201 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3928728 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3928835 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3929384 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3929406 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3929616 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3930643 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3931360 00:26:13.122 Removing: 
/var/run/dpdk/spdk_pid3931603 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3931758 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3931928 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3932097 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3932226 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3932349 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3932588 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3932939 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3934977 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3935107 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3935237 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3935334 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3935575 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3935672 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3935923 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3936010 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3936150 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3936246 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3936376 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3936387 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3936784 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3936908 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3937067 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3937203 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3937236 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3937384 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3937509 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3937718 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3937847 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3937969 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3938182 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3938305 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3938439 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3938648 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3938767 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3938894 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3939105 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3939233 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3939352 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3939564 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3939693 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3939820 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3940025 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3940164 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3940284 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3940495 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3940567 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3940739 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3942344 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3962547 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3964478 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3969947 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3972481 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3974324 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3974702 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3981001 00:26:13.122 Removing: /var/run/dpdk/spdk_pid3981091 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3981500 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3981997 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3982496 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3982801 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3982812 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3983004 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3983105 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3983113 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3983605 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3984025 00:26:13.381 Removing: 
/var/run/dpdk/spdk_pid3984520 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3984825 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3984918 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3985030 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3985810 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3986373 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3990464 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3990682 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3992635 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3995574 00:26:13.381 Removing: /var/run/dpdk/spdk_pid3997247 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4002805 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4006816 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4007723 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4008320 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4016126 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4017841 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4020005 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4020903 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4021921 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4022018 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4022125 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4022229 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4022562 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4023568 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4024134 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4024385 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4025708 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4026039 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4026448 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4028447 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4033388 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4035438 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4038414 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4039175 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4040141 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4042137 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4043882 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4047151 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4047159 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4049306 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4049408 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4049506 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4049798 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4049803 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4051746 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4052003 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4054054 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4055561 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4058306 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4061524 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4066625 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4070000 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4070002 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4079712 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4080110 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4080428 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4080825 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4081278 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4081598 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4081991 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4082311 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4084250 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4084361 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4087899 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4088032 00:26:13.381 Removing: /var/run/dpdk/spdk_pid4089287 00:26:13.381 Removing: 
/var/run/dpdk/spdk_pid4093162
00:26:13.381 Removing: /var/run/dpdk/spdk_pid4093195
00:26:13.381 Removing: /var/run/dpdk/spdk_pid4095422
00:26:13.381 Removing: /var/run/dpdk/spdk_pid4096493
00:26:13.381 Removing: /var/run/dpdk/spdk_pid4097643
00:26:13.381 Removing: /var/run/dpdk/spdk_pid4098216
00:26:13.381 Removing: /var/run/dpdk/spdk_pid4099281
00:26:13.381 Removing: /var/run/dpdk/spdk_pid4099863
00:26:13.381 Removing: /var/run/dpdk/spdk_pid4104023
00:26:13.381 Removing: /var/run/dpdk/spdk_pid4104251
00:26:13.381 Removing: /var/run/dpdk/spdk_pid4104626
00:26:13.381 Removing: /var/run/dpdk/spdk_pid4105768
00:26:13.381 Removing: /var/run/dpdk/spdk_pid4106073
00:26:13.381 Removing: /var/run/dpdk/spdk_pid4106292
00:26:13.381 Removing: /var/run/dpdk/spdk_pid4108145
00:26:13.381 Removing: /var/run/dpdk/spdk_pid4108239
00:26:13.381 Removing: /var/run/dpdk/spdk_pid4109392
00:26:13.381 Clean
00:26:13.639 01:03:00 -- common/autotest_common.sh@1447 -- # return 0
00:26:13.639 01:03:00 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup
00:26:13.639 01:03:00 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:13.639 01:03:00 -- common/autotest_common.sh@10 -- # set +x
00:26:13.639 01:03:00 -- spdk/autotest.sh@382 -- # timing_exit autotest
00:26:13.639 01:03:00 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:13.639 01:03:00 -- common/autotest_common.sh@10 -- # set +x
00:26:13.639 01:03:00 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:26:13.639 01:03:00 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:26:13.639 01:03:00 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:26:13.639 01:03:00 -- spdk/autotest.sh@387 -- # hash lcov
00:26:13.639 01:03:00 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:26:13.639 01:03:00 -- spdk/autotest.sh@389 -- # hostname
00:26:13.639 01:03:00 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-02 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:26:13.898 geninfo: WARNING: invalid characters removed from testname!
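From here the run switches to coverage post-processing: a fresh capture was just taken into cov_test.info, it is merged with the pre-test baseline into cov_total.info, and then everything that is not SPDK's own code is filtered back out of the combined tracefile. In outline, with the genhtml/geninfo rc options of the real invocations below omitted for brevity:

OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'

# Merge the pre-test baseline with the test-time capture.
$LCOV -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info

# Strip coverage attributed to bundled DPDK, system headers, and example apps.
$LCOV -r $OUT/cov_total.info '*/dpdk/*' -o $OUT/cov_total.info
$LCOV -r $OUT/cov_total.info '/usr/*'   -o $OUT/cov_total.info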
00:26:45.990 01:03:28 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:26:47.371 01:03:34 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:26:50.774 01:03:37 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:26:53.303 01:03:40 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:26:57.486 01:03:43 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:00.767 01:03:47 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:04.050 01:03:50 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:27:04.050 01:03:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:04.050 01:03:50 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:27:04.050 01:03:50 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:04.050 01:03:50 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:04.050 01:03:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
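The PATH lines around here keep growing because /etc/opt/spdk-pkgdep/paths/export.sh prepends the Go, golangci and protoc directories on every source without checking for duplicates. A dedup-safe variant would look like this (pathmunge is a hypothetical helper, not part of the suite):

# Prepend a directory to PATH only if it is not already present.
pathmunge() {
    case ":$PATH:" in
        *":$1:"*) ;;                 # already present; leave PATH alone
        *) PATH=$1:$PATH ;;
    esac
}
pathmunge /opt/go/1.21.1/bin
pathmunge /opt/golangci/1.54.2/bin
pathmunge /opt/protoc/21.7/bin
export PATH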
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.050 01:03:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.050 01:03:50 -- paths/export.sh@5 -- $ export PATH 00:27:04.050 01:03:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.050 01:03:50 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:27:04.050 01:03:50 -- common/autobuild_common.sh@437 -- $ date +%s 00:27:04.050 01:03:50 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715727830.XXXXXX 00:27:04.050 01:03:50 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715727830.ssBIKz 00:27:04.050 01:03:50 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:27:04.050 01:03:50 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:27:04.050 01:03:50 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:27:04.050 01:03:50 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:27:04.050 01:03:50 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:27:04.050 01:03:50 -- common/autobuild_common.sh@453 -- $ get_config_params 00:27:04.050 01:03:50 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:27:04.050 01:03:50 -- common/autotest_common.sh@10 -- $ set +x 00:27:04.050 01:03:50 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:27:04.050 01:03:50 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:27:04.050 01:03:50 -- pm/common@17 -- $ local monitor 00:27:04.050 01:03:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:04.050 01:03:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:04.050 01:03:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:04.050 01:03:50 -- pm/common@21 -- $ date +%s 00:27:04.050 01:03:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:04.050 01:03:50 -- pm/common@21 -- $ date +%s 00:27:04.050 
01:03:50 -- pm/common@25 -- $ sleep 1 00:27:04.050 01:03:50 -- pm/common@21 -- $ date +%s 00:27:04.050 01:03:50 -- pm/common@21 -- $ date +%s 00:27:04.050 01:03:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715727830 00:27:04.050 01:03:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715727830 00:27:04.050 01:03:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715727830 00:27:04.050 01:03:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715727830 00:27:04.051 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715727830_collect-vmstat.pm.log 00:27:04.051 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715727830_collect-cpu-load.pm.log 00:27:04.051 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715727830_collect-cpu-temp.pm.log 00:27:04.051 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715727830_collect-bmc-pm.bmc.pm.log 00:27:04.992 01:03:51 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:27:04.992 01:03:51 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j32 00:27:04.992 01:03:51 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:27:04.992 01:03:51 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:04.992 01:03:51 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:04.992 01:03:51 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:04.992 01:03:51 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:04.992 01:03:51 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:04.992 01:03:51 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:04.992 01:03:51 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:04.992 01:03:51 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:27:04.992 01:03:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:27:04.992 01:03:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:27:04.992 01:03:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:04.992 01:03:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:27:04.992 01:03:51 -- pm/common@44 -- $ pid=4117839 00:27:04.992 01:03:51 -- pm/common@50 -- $ kill -TERM 4117839 00:27:04.992 01:03:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:04.992 01:03:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:27:04.992 01:03:51 -- pm/common@44 -- $ pid=4117841 00:27:04.992 01:03:51 -- pm/common@50 -- $ kill 
-TERM 4117841 00:27:04.992 01:03:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:04.992 01:03:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:27:04.992 01:03:51 -- pm/common@44 -- $ pid=4117843 00:27:04.992 01:03:51 -- pm/common@50 -- $ kill -TERM 4117843 00:27:04.992 01:03:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:04.992 01:03:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:27:04.992 01:03:51 -- pm/common@44 -- $ pid=4117877 00:27:04.992 01:03:51 -- pm/common@50 -- $ sudo -E kill -TERM 4117877 00:27:04.992 + [[ -n 3850728 ]] 00:27:04.992 + sudo kill 3850728 00:27:05.002 [Pipeline] } 00:27:05.020 [Pipeline] // stage 00:27:05.025 [Pipeline] } 00:27:05.042 [Pipeline] // timeout 00:27:05.047 [Pipeline] } 00:27:05.066 [Pipeline] // catchError 00:27:05.073 [Pipeline] } 00:27:05.090 [Pipeline] // wrap 00:27:05.096 [Pipeline] } 00:27:05.111 [Pipeline] // catchError 00:27:05.119 [Pipeline] stage 00:27:05.121 [Pipeline] { (Epilogue) 00:27:05.135 [Pipeline] catchError 00:27:05.136 [Pipeline] { 00:27:05.151 [Pipeline] echo 00:27:05.152 Cleanup processes 00:27:05.158 [Pipeline] sh 00:27:05.443 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:27:05.443 4118002 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:27:05.443 4118058 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:27:05.457 [Pipeline] sh 00:27:05.741 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:27:05.741 ++ grep -v 'sudo pgrep' 00:27:05.741 ++ awk '{print $1}' 00:27:05.741 + sudo kill -9 4118002 00:27:05.754 [Pipeline] sh 00:27:06.037 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:16.033 [Pipeline] sh 00:27:16.319 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:16.319 Artifacts sizes are good 00:27:16.331 [Pipeline] archiveArtifacts 00:27:16.339 Archiving artifacts 00:27:16.550 [Pipeline] sh 00:27:16.862 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:27:16.878 [Pipeline] cleanWs 00:27:16.888 [WS-CLEANUP] Deleting project workspace... 00:27:16.888 [WS-CLEANUP] Deferred wipeout is used... 00:27:16.896 [WS-CLEANUP] done 00:27:16.898 [Pipeline] } 00:27:16.918 [Pipeline] // catchError 00:27:16.931 [Pipeline] sh 00:27:17.211 + logger -p user.info -t JENKINS-CI 00:27:17.220 [Pipeline] } 00:27:17.237 [Pipeline] // stage 00:27:17.243 [Pipeline] } 00:27:17.260 [Pipeline] // node 00:27:17.265 [Pipeline] End of Pipeline 00:27:17.301 Finished: SUCCESS